Test Report: Docker_Linux_containerd 15565

b70896c80ee4e66ab69b71a68ac4d59d2145555e:2023-01-08:27335

Test fail (17/268)
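
To reproduce the TestPreload failure locally, the same three commands recorded in the log below can be replayed in order (a sketch that assumes out/minikube-linux-amd64 has been built from the commit above; the profile name is arbitrary):

	out/minikube-linux-amd64 start -p test-preload-205820 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 ssh -p test-preload-205820 -- sudo crictl pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 start -p test-preload-205820 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.6

The third command (the restart with --kubernetes-version=v1.24.6) is the step that exits with status 81 in this run.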

TestPreload (364.34s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-205820 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-205820 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (53.962390202s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-205820 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-205820 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.769666125s)
preload_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-205820 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.6
E0108 21:00:15.379217   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 21:00:56.112316   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
E0108 21:02:57.125661   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 21:04:20.168298   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
preload_test.go:67: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-205820 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.6: exit status 81 (5m5.081225325s)

-- stdout --
	* [test-preload-205820] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	* Using the docker driver based on existing profile
	* Starting control plane node test-preload-205820 in cluster test-preload-205820
	* Pulling base image ...
	* Downloading Kubernetes v1.24.6 preload ...
	* Updating the running docker "test-preload-205820" container ...
	* Preparing Kubernetes v1.24.6 on containerd 1.6.10 ...
	* Configuring CNI (Container Networking Interface) ...
	X Problems detected in kubelet:
	  Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937893    4359 projected.go:192] Error preparing data for projected volume kube-api-access-svv2t for pod kube-system/kube-proxy-wmrz2: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	  Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937978    4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t podName:35e9935b-759b-4c18-9d0b-2c0daaab9a1e nodeName:}" failed. No retries permitted until 2023-01-08 21:00:11.937956077 +0000 UTC m=+9.792765068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-svv2t" (UniqueName: "kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t") pod "kube-proxy-wmrz2" (UID: "35e9935b-759b-4c18-9d0b-2c0daaab9a1e") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	  Jan 08 21:00:10 test-preload-205820 kubelet[4359]: W0108 21:00:09.938038    4359 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	
	

-- /stdout --
** stderr ** 
	I0108 20:59:15.922988  124694 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:59:15.923190  124694 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:59:15.923199  124694 out.go:309] Setting ErrFile to fd 2...
	I0108 20:59:15.923206  124694 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:59:15.923344  124694 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 20:59:15.923946  124694 out.go:303] Setting JSON to false
	I0108 20:59:15.925106  124694 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2505,"bootTime":1673209051,"procs":425,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:59:15.925171  124694 start.go:135] virtualization: kvm guest
	I0108 20:59:15.927955  124694 out.go:177] * [test-preload-205820] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:59:15.929374  124694 notify.go:220] Checking for updates...
	I0108 20:59:15.929404  124694 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 20:59:15.931238  124694 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:59:15.932840  124694 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 20:59:15.935379  124694 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 20:59:15.937020  124694 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:59:15.939039  124694 config.go:180] Loaded profile config "test-preload-205820": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0108 20:59:15.941039  124694 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I0108 20:59:15.942409  124694 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 20:59:15.970300  124694 docker.go:137] docker version: linux-20.10.22
	I0108 20:59:15.970401  124694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:59:16.062763  124694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2023-01-08 20:59:15.989379004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:59:16.062862  124694 docker.go:254] overlay module found
	I0108 20:59:16.065073  124694 out.go:177] * Using the docker driver based on existing profile
	I0108 20:59:16.066398  124694 start.go:294] selected driver: docker
	I0108 20:59:16.066409  124694 start.go:838] validating driver "docker" against &{Name:test-preload-205820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-205820 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 20:59:16.066519  124694 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:59:16.067271  124694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:59:16.159790  124694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2023-01-08 20:59:16.087078013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:59:16.160075  124694 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 20:59:16.160096  124694 cni.go:95] Creating CNI manager for ""
	I0108 20:59:16.160103  124694 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 20:59:16.160116  124694 start_flags.go:317] config:
	{Name:test-preload-205820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-205820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 20:59:16.162204  124694 out.go:177] * Starting control plane node test-preload-205820 in cluster test-preload-205820
	I0108 20:59:16.165845  124694 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 20:59:16.167544  124694 out.go:177] * Pulling base image ...
	I0108 20:59:16.169023  124694 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I0108 20:59:16.169127  124694 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 20:59:16.191569  124694 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 20:59:16.191596  124694 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 20:59:16.488573  124694 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I0108 20:59:16.488598  124694 cache.go:57] Caching tarball of preloaded images
	I0108 20:59:16.488917  124694 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I0108 20:59:16.491216  124694 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
	I0108 20:59:16.492629  124694 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I0108 20:59:17.039968  124694 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I0108 20:59:32.016227  124694 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I0108 20:59:32.016331  124694 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I0108 20:59:32.888834  124694 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.6 on containerd
	I0108 20:59:32.888992  124694 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/config.json ...
	I0108 20:59:32.889209  124694 cache.go:193] Successfully downloaded all kic artifacts
	I0108 20:59:32.889259  124694 start.go:364] acquiring machines lock for test-preload-205820: {Name:mk27a98eef575d3995d47e9b2c3065d636302b25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:59:32.889363  124694 start.go:368] acquired machines lock for "test-preload-205820" in 75.02µs
	I0108 20:59:32.889385  124694 start.go:96] Skipping create...Using existing machine configuration
	I0108 20:59:32.889395  124694 fix.go:55] fixHost starting: 
	I0108 20:59:32.889636  124694 cli_runner.go:164] Run: docker container inspect test-preload-205820 --format={{.State.Status}}
	I0108 20:59:32.913783  124694 fix.go:103] recreateIfNeeded on test-preload-205820: state=Running err=<nil>
	W0108 20:59:32.913829  124694 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 20:59:32.917800  124694 out.go:177] * Updating the running docker "test-preload-205820" container ...
	I0108 20:59:32.919462  124694 machine.go:88] provisioning docker machine ...
	I0108 20:59:32.919513  124694 ubuntu.go:169] provisioning hostname "test-preload-205820"
	I0108 20:59:32.919568  124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
	I0108 20:59:32.942125  124694 main.go:134] libmachine: Using SSH client type: native
	I0108 20:59:32.942374  124694 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32892 <nil> <nil>}
	I0108 20:59:32.942400  124694 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-205820 && echo "test-preload-205820" | sudo tee /etc/hostname
	I0108 20:59:33.063328  124694 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-205820
	
	I0108 20:59:33.063392  124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
	I0108 20:59:33.086668  124694 main.go:134] libmachine: Using SSH client type: native
	I0108 20:59:33.086810  124694 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32892 <nil> <nil>}
	I0108 20:59:33.086827  124694 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-205820' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-205820/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-205820' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:59:33.203200  124694 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:59:33.203231  124694 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 20:59:33.203257  124694 ubuntu.go:177] setting up certificates
	I0108 20:59:33.203273  124694 provision.go:83] configureAuth start
	I0108 20:59:33.203326  124694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-205820
	I0108 20:59:33.226487  124694 provision.go:138] copyHostCerts
	I0108 20:59:33.226543  124694 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 20:59:33.226550  124694 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 20:59:33.226616  124694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 20:59:33.226699  124694 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 20:59:33.226708  124694 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 20:59:33.226734  124694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 20:59:33.226788  124694 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 20:59:33.226795  124694 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 20:59:33.226817  124694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 20:59:33.226869  124694 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.test-preload-205820 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-205820]
	I0108 20:59:33.438802  124694 provision.go:172] copyRemoteCerts
	I0108 20:59:33.438859  124694 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:59:33.438889  124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
	I0108 20:59:33.462207  124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
	I0108 20:59:33.550321  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0108 20:59:33.566609  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 20:59:33.582624  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 20:59:33.598229  124694 provision.go:86] duration metric: configureAuth took 394.945613ms
	I0108 20:59:33.598253  124694 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:59:33.598410  124694 config.go:180] Loaded profile config "test-preload-205820": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
	I0108 20:59:33.598423  124694 machine.go:91] provisioned docker machine in 678.92515ms
	I0108 20:59:33.598432  124694 start.go:300] post-start starting for "test-preload-205820" (driver="docker")
	I0108 20:59:33.598441  124694 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:59:33.598485  124694 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:59:33.598529  124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
	I0108 20:59:33.620869  124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
	I0108 20:59:33.706833  124694 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:59:33.709432  124694 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:59:33.709452  124694 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:59:33.709460  124694 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:59:33.709466  124694 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 20:59:33.709473  124694 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 20:59:33.709515  124694 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 20:59:33.709584  124694 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 20:59:33.709657  124694 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:59:33.716065  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 20:59:33.732647  124694 start.go:303] post-start completed in 134.201143ms
	I0108 20:59:33.732700  124694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:59:33.732750  124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
	I0108 20:59:33.756085  124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
	I0108 20:59:33.835916  124694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:59:33.839883  124694 fix.go:57] fixHost completed within 950.482339ms
	I0108 20:59:33.839906  124694 start.go:83] releasing machines lock for "test-preload-205820", held for 950.52777ms
	I0108 20:59:33.839991  124694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-205820
	I0108 20:59:33.862646  124694 ssh_runner.go:195] Run: cat /version.json
	I0108 20:59:33.862692  124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
	I0108 20:59:33.862773  124694 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0108 20:59:33.862826  124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
	I0108 20:59:33.886491  124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
	I0108 20:59:33.886912  124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
	I0108 20:59:33.984937  124694 ssh_runner.go:195] Run: systemctl --version
	I0108 20:59:33.988836  124694 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 20:59:34.000114  124694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 20:59:34.008642  124694 docker.go:189] disabling docker service ...
	I0108 20:59:34.008693  124694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:59:34.017530  124694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:59:34.025801  124694 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:59:34.122708  124694 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:59:34.217961  124694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:59:34.226765  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:59:34.238797  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0108 20:59:34.246194  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 20:59:34.253558  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 20:59:34.261040  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 20:59:34.268683  124694 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:59:34.274677  124694 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:59:34.280603  124694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:59:34.370755  124694 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 20:59:34.445671  124694 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 20:59:34.445735  124694 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 20:59:34.449843  124694 start.go:472] Will wait 60s for crictl version
	I0108 20:59:34.449900  124694 ssh_runner.go:195] Run: sudo crictl version
	I0108 20:59:34.476629  124694 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T20:59:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 20:59:45.523600  124694 ssh_runner.go:195] Run: sudo crictl version
	I0108 20:59:45.547086  124694 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 20:59:45.547154  124694 ssh_runner.go:195] Run: containerd --version
	I0108 20:59:45.569590  124694 ssh_runner.go:195] Run: containerd --version
	I0108 20:59:45.594001  124694 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.10 ...
	I0108 20:59:45.595715  124694 cli_runner.go:164] Run: docker network inspect test-preload-205820 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:59:45.617246  124694 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0108 20:59:45.620504  124694 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I0108 20:59:45.620559  124694 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:59:45.642354  124694 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
	I0108 20:59:45.642439  124694 ssh_runner.go:195] Run: which lz4
	I0108 20:59:45.645255  124694 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 20:59:45.648306  124694 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0108 20:59:45.648333  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
	I0108 20:59:46.604476  124694 containerd.go:496] Took 0.959252 seconds to copy over tarball
	I0108 20:59:46.604556  124694 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 20:59:49.388621  124694 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.784042744s)
	I0108 20:59:49.388652  124694 containerd.go:503] Took 2.784153 seconds t extract the tarball
	I0108 20:59:49.388661  124694 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 20:59:49.410719  124694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:59:49.511828  124694 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 20:59:49.595221  124694 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:59:49.633196  124694 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.6 k8s.gcr.io/kube-controller-manager:v1.24.6 k8s.gcr.io/kube-scheduler:v1.24.6 k8s.gcr.io/kube-proxy:v1.24.6 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 20:59:49.633289  124694 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:59:49.633307  124694 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.6
	I0108 20:59:49.633331  124694 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.6
	I0108 20:59:49.633356  124694 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
	I0108 20:59:49.633443  124694 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
	I0108 20:59:49.633489  124694 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0108 20:59:49.633318  124694 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I0108 20:59:49.633821  124694 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.6
	I0108 20:59:49.634498  124694 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.6: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I0108 20:59:49.634524  124694 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
	I0108 20:59:49.634567  124694 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
	I0108 20:59:49.634498  124694 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0108 20:59:49.634576  124694 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:59:49.634592  124694 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.6: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.6
	I0108 20:59:49.634597  124694 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.6: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.6
	I0108 20:59:49.634594  124694 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.6: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.6
	I0108 20:59:50.047554  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
	I0108 20:59:50.082929  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6"
	I0108 20:59:50.099888  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
	I0108 20:59:50.103323  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6"
	I0108 20:59:50.117424  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
	I0108 20:59:50.146323  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6"
	I0108 20:59:50.152220  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.6"
	I0108 20:59:50.398896  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 20:59:50.629706  124694 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0108 20:59:50.629756  124694 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
	I0108 20:59:50.629794  124694 ssh_runner.go:195] Run: which crictl
	I0108 20:59:50.816705  124694 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.6" does not exist at hash "c6c20157a42337ecb7675be59e1dc34bc5a91288c7eeac1e30ec97767a9055eb" in container runtime
	I0108 20:59:50.816826  124694 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I0108 20:59:50.816908  124694 ssh_runner.go:195] Run: which crictl
	I0108 20:59:50.834757  124694 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0108 20:59:50.834807  124694 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
	I0108 20:59:50.834848  124694 ssh_runner.go:195] Run: which crictl
	I0108 20:59:50.922638  124694 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.6" does not exist at hash "c786c777a4e1c21907e77042428837645fa382d3bd14925cf78f0d163d6d332e" in container runtime
	I0108 20:59:50.922682  124694 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.6
	I0108 20:59:50.922719  124694 ssh_runner.go:195] Run: which crictl
	I0108 20:59:50.934129  124694 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0108 20:59:51.000970  124694 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0108 20:59:50.942667  124694 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.6" does not exist at hash "0bb39497ab33bb5f8aaff88ced53a5fcd360fcf5da647609619d4f5c8f1483d2" in container runtime
	I0108 20:59:51.001020  124694 ssh_runner.go:195] Run: which crictl
	I0108 20:59:51.001040  124694 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.6
	I0108 20:59:51.001068  124694 ssh_runner.go:195] Run: which crictl
	I0108 20:59:51.015918  124694 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.6" does not exist at hash "860f263331c9513ddab44d4d8a9a4a7304313b3aa0776decc1d7fc6acdd69ab0" in container runtime
	I0108 20:59:51.015958  124694 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.6
	I0108 20:59:51.016003  124694 ssh_runner.go:195] Run: which crictl
	I0108 20:59:51.052154  124694 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0108 20:59:51.052200  124694 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:59:51.052241  124694 ssh_runner.go:195] Run: which crictl
	I0108 20:59:51.052242  124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
	I0108 20:59:51.052305  124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6
	I0108 20:59:51.052367  124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
	I0108 20:59:51.052412  124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6
	I0108 20:59:51.052474  124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
	I0108 20:59:51.052542  124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6
	I0108 20:59:52.140730  124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7: (1.088416372s)
	I0108 20:59:52.140757  124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
	I0108 20:59:52.140759  124694 ssh_runner.go:235] Completed: which crictl: (1.088481701s)
	I0108 20:59:52.140801  124694 ssh_runner.go:235] Completed: which crictl: (1.124782782s)
	I0108 20:59:52.140815  124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:59:52.140840  124694 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0108 20:59:52.140885  124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6: (1.088559722s)
	I0108 20:59:52.140843  124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6
	I0108 20:59:52.140906  124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6
	I0108 20:59:52.140996  124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6: (1.088560881s)
	I0108 20:59:52.141009  124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0: (1.088624706s)
	I0108 20:59:52.141014  124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6
	I0108 20:59:52.141017  124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
	I0108 20:59:52.141071  124694 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0108 20:59:52.141105  124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6: (1.088539031s)
	I0108 20:59:52.141119  124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6
	I0108 20:59:52.141068  124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6: (1.088569381s)
	I0108 20:59:52.141133  124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
	I0108 20:59:52.141193  124694 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0108 20:59:52.235063  124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0108 20:59:52.235158  124694 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0108 20:59:52.235188  124694 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0108 20:59:52.235208  124694 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.7
	I0108 20:59:52.235211  124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6
	I0108 20:59:52.235244  124694 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
	I0108 20:59:52.235262  124694 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0108 20:59:52.235301  124694 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0108 20:59:52.348684  124694 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
	I0108 20:59:52.348714  124694 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0108 20:59:52.348759  124694 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I0108 20:59:52.348772  124694 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0108 20:59:53.355117  124694 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (1.006333066s)
	I0108 20:59:53.355138  124694 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
	I0108 20:59:53.355161  124694 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0108 20:59:53.355197  124694 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
	I0108 20:59:58.744440  124694 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (5.389207325s)
	I0108 20:59:58.744469  124694 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
	I0108 20:59:58.744495  124694 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0108 20:59:58.744532  124694 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0108 20:59:59.645452  124694 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0108 20:59:59.645514  124694 cache_images.go:92] LoadImages completed in 10.012283055s
	W0108 20:59:59.645650  124694 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6: no such file or directory
	I0108 20:59:59.645712  124694 ssh_runner.go:195] Run: sudo crictl info
	I0108 20:59:59.719369  124694 cni.go:95] Creating CNI manager for ""
	I0108 20:59:59.719404  124694 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 20:59:59.719417  124694 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:59:59.719431  124694 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-205820 NodeName:test-preload-205820 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 20:59:59.719633  124694 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "test-preload-205820"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 20:59:59.719739  124694 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-205820 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.6 ClusterName:test-preload-205820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 20:59:59.719791  124694 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
	I0108 20:59:59.726680  124694 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:59:59.726736  124694 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 20:59:59.734052  124694 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (512 bytes)
	I0108 20:59:59.749257  124694 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:59:59.764256  124694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0108 20:59:59.823242  124694 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 20:59:59.826766  124694 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820 for IP: 192.168.67.2
	I0108 20:59:59.826880  124694 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 20:59:59.826936  124694 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 20:59:59.827034  124694 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/client.key
	I0108 20:59:59.827114  124694 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/apiserver.key.c7fa3a9e
	I0108 20:59:59.827165  124694 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/proxy-client.key
	I0108 20:59:59.827281  124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 20:59:59.827327  124694 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 20:59:59.827342  124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:59:59.827372  124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 20:59:59.827409  124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:59:59.827438  124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 20:59:59.827512  124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 20:59:59.828247  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 20:59:59.848605  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 20:59:59.867107  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 20:59:59.929393  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 20:59:59.947265  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:59:59.967659  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 20:59:59.986203  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:00:00.028839  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:00:00.054242  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:00:00.071784  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:00:00.087997  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:00:00.123064  124694 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:00:00.135539  124694 ssh_runner.go:195] Run: openssl version
	I0108 21:00:00.140139  124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:00:00.147247  124694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:00:00.150148  124694 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:00:00.150197  124694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:00:00.154652  124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:00:00.161321  124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:00:00.169127  124694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:00:00.171911  124694 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:00:00.171967  124694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:00:00.176639  124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:00:00.182896  124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:00:00.189696  124694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:00:00.210855  124694 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:00:00.210904  124694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:00:00.215636  124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
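	The openssl/ln pairs above install each certificate into the system trust store using OpenSSL's subject-hash layout: the hash printed by `openssl x509 -hash -noout` becomes the `<hash>.0` symlink name under /etc/ssl/certs. The check for the minikube CA from this run can be reproduced on the node by hand:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941 in this run
	    ls -l /etc/ssl/certs/b5213941.0                                            # should resolve to minikubeCA.pem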
	I0108 21:00:00.222153  124694 kubeadm.go:396] StartCluster: {Name:test-preload-205820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-205820 Namespace:default APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:00:00.222257  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:00:00.222298  124694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:00:00.245669  124694 cri.go:87] found id: "43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4"
	I0108 21:00:00.245696  124694 cri.go:87] found id: "3852802493079c7473ee812611ecf809b363dd7bd001d0400d405c7b881a6881"
	I0108 21:00:00.245706  124694 cri.go:87] found id: "0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659"
	I0108 21:00:00.245715  124694 cri.go:87] found id: ""
	I0108 21:00:00.245772  124694 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:00:00.277898  124694 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459","pid":1612,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459/rootfs","created":"2023-01-08T20:58:44.075786098Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c","io.kubernetes.cri.sandbox-name":"etcd-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817","pid":2685,"status":"running","bundle":"/ru
n/containerd/io.containerd.runtime.v2.task/k8s.io/08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817/rootfs","created":"2023-01-08T20:59:11.277618302Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659","pid":3743,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659","rootfs":"/run/containerd/io.containerd.runtime.v2.task
/k8s.io/0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659/rootfs","created":"2023-01-08T20:59:53.536252787Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-48vmf","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb","pid":3679,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb/rootfs","created":"2023-01-08T20:59:52.952963041Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io
.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_bdbd16cd-b53b-4309-ad17-7915a6d7b693","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3","pid":2625,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3/rootfs","created":"2023-01-08T20:59:11.178050048Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubern
etes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-48vmf_d43c5f88-44b8-4ab6-bc5b-f2883eda56e2","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-48vmf","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae","pid":2211,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae/rootfs","created":"2023-01-08T20:59:03.662818408Z","annotations":{"io.kubernetes.cri.container-type":"sandbo
x","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wmrz2_35e9935b-759b-4c18-9d0b-2c0daaab9a1e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-wmrz2","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c","pid":1658,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c/rootfs","created":"2023-01-08T20:58:44.120902562Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io
.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-scheduler:v1.24.4","io.kubernetes.cri.sandbox-id":"c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071","pid":2488,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071/rootfs","created":"2023-01-08T20:59:07.90993923Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20221004-44d545d1","io.kubernetes.cri.sandbox-id":"414cc3f9f286440d220
4fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a","io.kubernetes.cri.sandbox-name":"kindnet-mtvg5","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd","pid":1657,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd/rootfs","created":"2023-01-08T20:58:44.121187645Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-apiserver:v1.24.4","io.kubernetes.cri.sandbox-id":"a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"oci
Version":"1.0.2-dev","id":"414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a","pid":2210,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a/rootfs","created":"2023-01-08T20:59:03.715705604Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-mtvg5_1257f157-44a7-41fe-9d98-48b85ce53a40","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-mtvg5","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersio
n":"1.0.2-dev","id":"41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265","pid":3646,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265/rootfs","created":"2023-01-08T20:59:52.914259586Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-mtvg5_1257f157-44a7-41fe-9d98-48b85ce53a40","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-mtvg5","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.
0.2-dev","id":"43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4","pid":4073,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4/rootfs","created":"2023-01-08T20:59:59.961439321Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c","pid":1522,"status":"running","bundle":"/run/containerd/io.con
tainerd.runtime.v2.task/k8s.io/5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c/rootfs","created":"2023-01-08T20:58:43.912562088Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-205820_0106aa4904eaf95a3dcc4972da83cce0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6","pid":1611,"status":"running","bundle":"/run/containerd/io.co
ntainerd.runtime.v2.task/k8s.io/67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6/rootfs","created":"2023-01-08T20:58:44.078720095Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111","pid":3579,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111","rootfs":"/run/containerd/io.containerd.runti
me.v2.task/k8s.io/7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111/rootfs","created":"2023-01-08T20:59:52.820275074Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-205820_0d00ad4c93ccd906fbcaecbff49fd727","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855","pid":3470,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855","rootfs":"/run/container
d/io.containerd.runtime.v2.task/k8s.io/73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855/rootfs","created":"2023-01-08T20:59:52.622948749Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-205820_3137f4b6a8ebd97ba2fc8851160ac0b1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d","pid":3442,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea7454
6fe19d4e0496d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d/rootfs","created":"2023-01-08T20:59:52.55532244Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-205820_044b6365f10644e1fab9f12495485e76","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3","pid":1520,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2b1f431ca407ff679e24ad09153258
04e13945554f39501c29ac7dcf5ab81f3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3/rootfs","created":"2023-01-08T20:58:43.914531641Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-205820_044b6365f10644e1fab9f12495485e76","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c5391d45b9b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462","pid":2246,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5391d45b9
b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5391d45b9b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462/rootfs","created":"2023-01-08T20:59:03.781592888Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae","io.kubernetes.cri.sandbox-name":"kube-proxy-wmrz2","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65","pid":1521,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65/rootfs","cre
ated":"2023-01-08T20:58:43.918296824Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-205820_0d00ad4c93ccd906fbcaecbff49fd727","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf","pid":2624,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d4403
53ac76bf/rootfs","created":"2023-01-08T20:59:11.177965157Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_bdbd16cd-b53b-4309-ad17-7915a6d7b693","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd15863848","pid":2686,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd15863848","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd1586384
8/rootfs","created":"2023-01-08T20:59:11.277494639Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-48vmf","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f","pid":1523,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f/rootfs","created":"2023-01-08T20:58:43.918339088Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox
-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-205820_3137f4b6a8ebd97ba2fc8851160ac0b1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67","pid":3427,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67/rootfs","created":"2023-01-08T20:59:52.545724953Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-peri
od":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-48vmf_d43c5f88-44b8-4ab6-bc5b-f2883eda56e2","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-48vmf","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0","pid":3658,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0/rootfs","created":"2023-01-08T20:59:52.920247257Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.san
dbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wmrz2_35e9935b-759b-4c18-9d0b-2c0daaab9a1e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-wmrz2","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb","pid":3534,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb/rootfs","created":"2023-01-08T20:59:52.73552926Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-perio
d":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-205820_0106aa4904eaf95a3dcc4972da83cce0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
	I0108 21:00:00.278314  124694 cri.go:124] list returned 26 containers
	I0108 21:00:00.278332  124694 cri.go:127] container: {ID:065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459 Status:running}
	I0108 21:00:00.278347  124694 cri.go:129] skipping 065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459 - not in ps
	I0108 21:00:00.278355  124694 cri.go:127] container: {ID:08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817 Status:running}
	I0108 21:00:00.278368  124694 cri.go:129] skipping 08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817 - not in ps
	I0108 21:00:00.278384  124694 cri.go:127] container: {ID:0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659 Status:running}
	I0108 21:00:00.278397  124694 cri.go:133] skipping {0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659 running}: state = "running", want "paused"
	I0108 21:00:00.278410  124694 cri.go:127] container: {ID:10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb Status:running}
	I0108 21:00:00.278422  124694 cri.go:129] skipping 10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb - not in ps
	I0108 21:00:00.278433  124694 cri.go:127] container: {ID:12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3 Status:running}
	I0108 21:00:00.278442  124694 cri.go:129] skipping 12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3 - not in ps
	I0108 21:00:00.278451  124694 cri.go:127] container: {ID:149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae Status:running}
	I0108 21:00:00.278461  124694 cri.go:129] skipping 149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae - not in ps
	I0108 21:00:00.278471  124694 cri.go:127] container: {ID:2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c Status:running}
	I0108 21:00:00.278482  124694 cri.go:129] skipping 2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c - not in ps
	I0108 21:00:00.278493  124694 cri.go:127] container: {ID:2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071 Status:running}
	I0108 21:00:00.278502  124694 cri.go:129] skipping 2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071 - not in ps
	I0108 21:00:00.278512  124694 cri.go:127] container: {ID:40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd Status:running}
	I0108 21:00:00.278525  124694 cri.go:129] skipping 40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd - not in ps
	I0108 21:00:00.278536  124694 cri.go:127] container: {ID:414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a Status:running}
	I0108 21:00:00.278547  124694 cri.go:129] skipping 414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a - not in ps
	I0108 21:00:00.278554  124694 cri.go:127] container: {ID:41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265 Status:running}
	I0108 21:00:00.278566  124694 cri.go:129] skipping 41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265 - not in ps
	I0108 21:00:00.278576  124694 cri.go:127] container: {ID:43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4 Status:running}
	I0108 21:00:00.278588  124694 cri.go:133] skipping {43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4 running}: state = "running", want "paused"
	I0108 21:00:00.278603  124694 cri.go:127] container: {ID:5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c Status:running}
	I0108 21:00:00.278615  124694 cri.go:129] skipping 5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c - not in ps
	I0108 21:00:00.278633  124694 cri.go:127] container: {ID:67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6 Status:running}
	I0108 21:00:00.278644  124694 cri.go:129] skipping 67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6 - not in ps
	I0108 21:00:00.278651  124694 cri.go:127] container: {ID:7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111 Status:running}
	I0108 21:00:00.278660  124694 cri.go:129] skipping 7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111 - not in ps
	I0108 21:00:00.278667  124694 cri.go:127] container: {ID:73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855 Status:running}
	I0108 21:00:00.278679  124694 cri.go:129] skipping 73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855 - not in ps
	I0108 21:00:00.278687  124694 cri.go:127] container: {ID:833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d Status:running}
	I0108 21:00:00.278699  124694 cri.go:129] skipping 833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d - not in ps
	I0108 21:00:00.278707  124694 cri.go:127] container: {ID:a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3 Status:running}
	I0108 21:00:00.278719  124694 cri.go:129] skipping a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3 - not in ps
	I0108 21:00:00.278729  124694 cri.go:127] container: {ID:c5391d45b9b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462 Status:running}
	I0108 21:00:00.278737  124694 cri.go:129] skipping c5391d45b9b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462 - not in ps
	I0108 21:00:00.278744  124694 cri.go:127] container: {ID:c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65 Status:running}
	I0108 21:00:00.278756  124694 cri.go:129] skipping c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65 - not in ps
	I0108 21:00:00.278767  124694 cri.go:127] container: {ID:c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf Status:running}
	I0108 21:00:00.278780  124694 cri.go:129] skipping c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf - not in ps
	I0108 21:00:00.278790  124694 cri.go:127] container: {ID:c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd15863848 Status:running}
	I0108 21:00:00.278804  124694 cri.go:129] skipping c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd15863848 - not in ps
	I0108 21:00:00.278814  124694 cri.go:127] container: {ID:d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f Status:running}
	I0108 21:00:00.278822  124694 cri.go:129] skipping d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f - not in ps
	I0108 21:00:00.278830  124694 cri.go:127] container: {ID:ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67 Status:running}
	I0108 21:00:00.278842  124694 cri.go:129] skipping ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67 - not in ps
	I0108 21:00:00.278852  124694 cri.go:127] container: {ID:ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0 Status:running}
	I0108 21:00:00.278862  124694 cri.go:129] skipping ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0 - not in ps
	I0108 21:00:00.278872  124694 cri.go:127] container: {ID:ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb Status:running}
	I0108 21:00:00.278883  124694 cri.go:129] skipping ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb - not in ps
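	Nothing is actually skipped in error here: this pass asked for CRI containers in state "paused", so the three kube-system container IDs returned by crictl are skipped with state = "running", want "paused", and the remaining runc tasks (mostly pod sandboxes) are skipped as "not in ps" because they never appeared in the crictl listing. The two underlying queries appear verbatim in the log and can be rerun on the node:

	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	    sudo runc --root /run/containerd/runc/k8s.io list -f json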
	I0108 21:00:00.278925  124694 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:00:00.286080  124694 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:00:00.286102  124694 kubeadm.go:627] restartCluster start
	I0108 21:00:00.286141  124694 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:00:00.292256  124694 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:00:00.292769  124694 kubeconfig.go:92] found "test-preload-205820" server: "https://192.168.67.2:8443"
	I0108 21:00:00.293379  124694 kapi.go:59] client config for test-preload-205820: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:00:00.293896  124694 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:00:00.302755  124694 kubeadm.go:594] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-01-08 20:58:39.826861611 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-01-08 20:59:59.816713998 +0000
	@@ -38,7 +38,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.24.4
	+kubernetesVersion: v1.24.6
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I0108 21:00:00.302770  124694 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:00:00.302789  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:00:00.302824  124694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:00:00.329264  124694 cri.go:87] found id: "43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4"
	I0108 21:00:00.329296  124694 cri.go:87] found id: "3852802493079c7473ee812611ecf809b363dd7bd001d0400d405c7b881a6881"
	I0108 21:00:00.329308  124694 cri.go:87] found id: "0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659"
	I0108 21:00:00.329317  124694 cri.go:87] found id: ""
	I0108 21:00:00.329323  124694 cri.go:232] Stopping containers: [43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4 3852802493079c7473ee812611ecf809b363dd7bd001d0400d405c7b881a6881 0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659]
	I0108 21:00:00.329366  124694 ssh_runner.go:195] Run: which crictl
	I0108 21:00:00.332622  124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4 3852802493079c7473ee812611ecf809b363dd7bd001d0400d405c7b881a6881 0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659
	I0108 21:00:00.624345  124694 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:00:00.699226  124694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:00:00.706356  124694 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan  8 20:58 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan  8 20:58 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2015 Jan  8 20:58 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  8 20:58 /etc/kubernetes/scheduler.conf
	
	I0108 21:00:00.706408  124694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 21:00:00.713037  124694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 21:00:00.719542  124694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 21:00:00.725937  124694 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:00:00.725991  124694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 21:00:00.731944  124694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 21:00:00.738208  124694 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:00:00.738259  124694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 21:00:00.744328  124694 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:00:00.750786  124694 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:00:00.750804  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:00:00.994143  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:00:01.861835  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:00:02.144772  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:00:02.193739  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
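	For a restart, minikube replays individual kubeadm init phases against the updated /var/tmp/minikube/kubeadm.yaml instead of running a full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane and etcd above, plus the addon phase later in this log. Each phase is an ordinary kubeadm invocation and can be repeated manually, e.g.:

	    sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml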
	I0108 21:00:02.312980  124694 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:00:02.313046  124694 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:00:02.324151  124694 api_server.go:71] duration metric: took 11.177196ms to wait for apiserver process to appear ...
	I0108 21:00:02.324188  124694 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:00:02.324232  124694 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0108 21:00:02.329308  124694 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0108 21:00:02.336848  124694 api_server.go:140] control plane version: v1.24.4
	W0108 21:00:02.336885  124694 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I0108 21:00:02.838027  124694 api_server.go:140] control plane version: v1.24.4
	W0108 21:00:02.838054  124694 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I0108 21:00:03.338861  124694 api_server.go:140] control plane version: v1.24.4
	W0108 21:00:03.338897  124694 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I0108 21:00:03.837783  124694 api_server.go:140] control plane version: v1.24.4
	W0108 21:00:03.837811  124694 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I0108 21:00:04.338312  124694 api_server.go:140] control plane version: v1.24.4
	W0108 21:00:04.338339  124694 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	W0108 21:00:04.837852  124694 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0108 21:00:05.337803  124694 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0108 21:00:05.837782  124694 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0108 21:00:06.338026  124694 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	I0108 21:00:09.935143  124694 api_server.go:140] control plane version: v1.24.6
	I0108 21:00:09.935175  124694 api_server.go:130] duration metric: took 7.610979606s to wait for apiserver health ...
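	The sequence above is the expected shape of an in-place control-plane upgrade: the old v1.24.4 apiserver keeps answering /version until its static pod is replaced, the port then refuses connections for a few seconds, and the v1.24.6 apiserver comes up at 21:00:09. The same probes can be issued by hand against the node IP from this run (self-signed cert, hence -k):

	    curl -k https://192.168.67.2:8443/healthz
	    curl -k https://192.168.67.2:8443/version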
	I0108 21:00:09.935185  124694 cni.go:95] Creating CNI manager for ""
	I0108 21:00:09.935193  124694 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:00:09.937716  124694 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:00:09.939281  124694 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:00:10.021100  124694 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.6/kubectl ...
	I0108 21:00:10.021132  124694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:00:10.133101  124694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:00:11.267907  124694 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.134775053s)
	I0108 21:00:11.267939  124694 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:00:11.274594  124694 system_pods.go:59] 6 kube-system pods found
	I0108 21:00:11.274625  124694 system_pods.go:61] "coredns-6d4b75cb6d-48vmf" [d43c5f88-44b8-4ab6-bc5b-f2883eda56e2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 21:00:11.274637  124694 system_pods.go:61] "etcd-test-preload-205820" [f39e5236-110c-4587-8d2c-7da2d7802adc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:00:11.274644  124694 system_pods.go:61] "kindnet-mtvg5" [1257f157-44a7-41fe-9d98-48b85ce53a40] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:00:11.274653  124694 system_pods.go:61] "kube-proxy-wmrz2" [35e9935b-759b-4c18-9d0b-2c0daaab9a1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:00:11.274659  124694 system_pods.go:61] "kube-scheduler-test-preload-205820" [e0e1f824-50ae-4a61-b2c6-d7d2bb6f2edc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:00:11.274664  124694 system_pods.go:61] "storage-provisioner" [bdbd16cd-b53b-4309-ad17-7915a6d7b693] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 21:00:11.274669  124694 system_pods.go:74] duration metric: took 6.724913ms to wait for pod list to return data ...
	I0108 21:00:11.274676  124694 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:00:11.276970  124694 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:00:11.276995  124694 node_conditions.go:123] node cpu capacity is 8
	I0108 21:00:11.277010  124694 node_conditions.go:105] duration metric: took 2.328282ms to run NodePressure ...
	I0108 21:00:11.277035  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:00:11.436079  124694 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 21:00:11.439304  124694 kubeadm.go:778] kubelet initialised
	I0108 21:00:11.439324  124694 kubeadm.go:779] duration metric: took 3.225451ms waiting for restarted kubelet to initialise ...
	I0108 21:00:11.439330  124694 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:00:11.443291  124694 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace to be "Ready" ...
	I0108 21:00:13.452847  124694 pod_ready.go:102] pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:15.453183  124694 pod_ready.go:102] pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:17.953269  124694 pod_ready.go:92] pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace has status "Ready":"True"
	I0108 21:00:17.953294  124694 pod_ready.go:81] duration metric: took 6.509981854s waiting for pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace to be "Ready" ...
	I0108 21:00:17.953304  124694 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-205820" in "kube-system" namespace to be "Ready" ...
	I0108 21:00:19.962548  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:21.963216  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:23.963314  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:26.462627  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:28.462965  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:30.962959  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:32.963068  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:35.463009  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:37.962454  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:40.462881  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:42.963385  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:45.462486  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:47.962468  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:49.962746  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:51.963178  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:54.463217  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:56.963323  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:59.463092  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:01.963156  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:04.463567  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:06.464930  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:08.962935  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:11.463300  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:13.962969  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:16.463128  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:18.963199  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:20.963826  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:23.462743  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:25.463158  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:27.962188  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:29.963079  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:32.464217  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:34.962854  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:37.462215  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:39.462584  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:41.462699  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:43.462915  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:45.963307  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:48.463544  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:50.963045  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:52.963170  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:55.462700  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:57.463256  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:59.962706  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:01.962779  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:03.963173  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:06.463371  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:08.463437  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:10.465071  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:12.963206  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:15.462589  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:17.462845  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:19.962938  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:21.963353  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:24.463222  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:26.463680  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:28.962594  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:30.962697  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:32.963185  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:35.462477  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:37.463216  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:39.962881  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:42.462539  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:44.462864  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:46.462968  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:48.962577  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:50.962760  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:53.464211  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:55.963075  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:58.463348  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:00.962702  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:02.962942  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:04.963134  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:07.462937  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:09.962917  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:12.462863  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:14.962823  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:17.462424  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:19.462845  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:21.962750  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:24.462946  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:26.463390  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:28.962923  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:30.963325  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:33.462969  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:35.963094  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:38.462979  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:40.963186  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:43.462328  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:45.462741  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:47.962483  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:49.963279  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:51.963334  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:54.462958  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:56.963433  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:58.963562  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:00.963753  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:03.463621  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:05.962769  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:07.962891  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:09.963338  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:12.462686  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:14.463369  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:16.963058  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:17.957364  124694 pod_ready.go:81] duration metric: took 4m0.004045666s waiting for pod "etcd-test-preload-205820" in "kube-system" namespace to be "Ready" ...
	E0108 21:04:17.957391  124694 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "etcd-test-preload-205820" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:04:17.957419  124694 pod_ready.go:38] duration metric: took 4m6.518080998s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:04:17.957445  124694 kubeadm.go:631] restartCluster took 4m17.671337074s
	W0108 21:04:17.957589  124694 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:04:17.957621  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:04:19.626459  124694 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.668819722s)
	I0108 21:04:19.626516  124694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:04:19.635943  124694 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:04:19.642808  124694 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:04:19.642862  124694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:04:19.649319  124694 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:04:19.649357  124694 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:04:19.686509  124694 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I0108 21:04:19.686580  124694 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:04:19.714334  124694 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:04:19.714410  124694 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:04:19.714442  124694 kubeadm.go:317] OS: Linux
	I0108 21:04:19.714480  124694 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:04:19.714520  124694 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:04:19.714613  124694 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:04:19.714688  124694 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:04:19.714729  124694 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:04:19.714777  124694 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:04:19.714821  124694 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:04:19.714864  124694 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:04:19.714905  124694 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:04:19.795815  124694 kubeadm.go:317] W0108 21:04:19.681686    6711 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:04:19.796049  124694 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:04:19.796184  124694 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:04:19.796272  124694 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I0108 21:04:19.796332  124694 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I0108 21:04:19.796381  124694 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I0108 21:04:19.796489  124694 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I0108 21:04:19.796595  124694 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0108 21:04:19.796778  124694 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0108 21:04:19.681686    6711 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0108 21:04:19.681686    6711 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	I0108 21:04:19.796820  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:04:20.125925  124694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:04:20.135276  124694 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:04:20.135332  124694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:04:20.142002  124694 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:04:20.142045  124694 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:04:20.178099  124694 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I0108 21:04:20.178220  124694 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:04:20.203461  124694 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:04:20.203557  124694 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:04:20.203613  124694 kubeadm.go:317] OS: Linux
	I0108 21:04:20.203661  124694 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:04:20.203724  124694 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:04:20.203781  124694 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:04:20.203869  124694 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:04:20.203928  124694 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:04:20.203973  124694 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:04:20.204056  124694 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:04:20.204123  124694 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:04:20.204198  124694 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:04:20.268181  124694 kubeadm.go:317] W0108 21:04:20.173147    6979 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:04:20.268365  124694 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:04:20.268449  124694 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:04:20.268528  124694 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I0108 21:04:20.268566  124694 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I0108 21:04:20.268640  124694 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I0108 21:04:20.268767  124694 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I0108 21:04:20.268860  124694 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0108 21:04:20.268932  124694 kubeadm.go:398] StartCluster complete in 4m20.046785929s
	I0108 21:04:20.268974  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:04:20.269027  124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:04:20.291757  124694 cri.go:87] found id: ""
	I0108 21:04:20.291784  124694 logs.go:274] 0 containers: []
	W0108 21:04:20.291794  124694 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:04:20.291800  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:04:20.291843  124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:04:20.314092  124694 cri.go:87] found id: ""
	I0108 21:04:20.314115  124694 logs.go:274] 0 containers: []
	W0108 21:04:20.314121  124694 logs.go:276] No container was found matching "etcd"
	I0108 21:04:20.314127  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:04:20.314165  124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:04:20.336438  124694 cri.go:87] found id: ""
	I0108 21:04:20.336466  124694 logs.go:274] 0 containers: []
	W0108 21:04:20.336476  124694 logs.go:276] No container was found matching "coredns"
	I0108 21:04:20.336485  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:04:20.336531  124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:04:20.360386  124694 cri.go:87] found id: ""
	I0108 21:04:20.360419  124694 logs.go:274] 0 containers: []
	W0108 21:04:20.360428  124694 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:04:20.360436  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:04:20.360477  124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:04:20.384216  124694 cri.go:87] found id: ""
	I0108 21:04:20.384244  124694 logs.go:274] 0 containers: []
	W0108 21:04:20.384251  124694 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:04:20.384259  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:04:20.384307  124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:04:20.407359  124694 cri.go:87] found id: ""
	I0108 21:04:20.407385  124694 logs.go:274] 0 containers: []
	W0108 21:04:20.407391  124694 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:04:20.407397  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:04:20.407446  124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:04:20.429513  124694 cri.go:87] found id: ""
	I0108 21:04:20.429538  124694 logs.go:274] 0 containers: []
	W0108 21:04:20.429547  124694 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:04:20.429554  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:04:20.429592  124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:04:20.452750  124694 cri.go:87] found id: ""
	I0108 21:04:20.452771  124694 logs.go:274] 0 containers: []
	W0108 21:04:20.452777  124694 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:04:20.452786  124694 logs.go:123] Gathering logs for kubelet ...
	I0108 21:04:20.452797  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:04:20.510605  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937893    4359 projected.go:192] Error preparing data for projected volume kube-api-access-svv2t for pod kube-system/kube-proxy-wmrz2: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.511028  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937978    4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t podName:35e9935b-759b-4c18-9d0b-2c0daaab9a1e nodeName:}" failed. No retries permitted until 2023-01-08 21:00:11.937956077 +0000 UTC m=+9.792765068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-svv2t" (UniqueName: "kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t") pod "kube-proxy-wmrz2" (UID: "35e9935b-759b-4c18-9d0b-2c0daaab9a1e") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.511172  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: W0108 21:00:09.938038    4359 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.511334  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938056    4359 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.511496  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: W0108 21:00:09.938110    4359 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.511664  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938117    4359 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.511857  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938151    4359 projected.go:192] Error preparing data for projected volume kube-api-access-wvwgn for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.512266  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938177    4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdbd16cd-b53b-4309-ad17-7915a6d7b693-kube-api-access-wvwgn podName:bdbd16cd-b53b-4309-ad17-7915a6d7b693 nodeName:}" failed. No retries permitted until 2023-01-08 21:00:10.938168618 +0000 UTC m=+8.792977602 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wvwgn" (UniqueName: "kubernetes.io/projected/bdbd16cd-b53b-4309-ad17-7915a6d7b693-kube-api-access-wvwgn") pod "storage-provisioner" (UID: "bdbd16cd-b53b-4309-ad17-7915a6d7b693") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.512442  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938217    4359 projected.go:192] Error preparing data for projected volume kube-api-access-s5nz9 for pod kube-system/kindnet-mtvg5: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.512847  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938249    4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1257f157-44a7-41fe-9d98-48b85ce53a40-kube-api-access-s5nz9 podName:1257f157-44a7-41fe-9d98-48b85ce53a40 nodeName:}" failed. No retries permitted until 2023-01-08 21:00:10.938238341 +0000 UTC m=+8.793047329 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s5nz9" (UniqueName: "kubernetes.io/projected/1257f157-44a7-41fe-9d98-48b85ce53a40-kube-api-access-s5nz9") pod "kindnet-mtvg5" (UID: "1257f157-44a7-41fe-9d98-48b85ce53a40") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.513031  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938309    4359 projected.go:192] Error preparing data for projected volume kube-api-access-9t8jr for pod kube-system/coredns-6d4b75cb6d-48vmf: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.513475  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938332    4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d43c5f88-44b8-4ab6-bc5b-f2883eda56e2-kube-api-access-9t8jr podName:d43c5f88-44b8-4ab6-bc5b-f2883eda56e2 nodeName:}" failed. No retries permitted until 2023-01-08 21:00:10.938325487 +0000 UTC m=+8.793134472 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9t8jr" (UniqueName: "kubernetes.io/projected/d43c5f88-44b8-4ab6-bc5b-f2883eda56e2-kube-api-access-9t8jr") pod "coredns-6d4b75cb6d-48vmf" (UID: "d43c5f88-44b8-4ab6-bc5b-f2883eda56e2") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.513628  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: W0108 21:00:09.938363    4359 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.513802  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938372    4359 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	I0108 21:04:20.534040  124694 logs.go:123] Gathering logs for dmesg ...
	I0108 21:04:20.534063  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:04:20.547468  124694 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:04:20.547515  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:04:20.836897  124694 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:04:20.836920  124694 logs.go:123] Gathering logs for containerd ...
	I0108 21:04:20.836933  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:04:20.891961  124694 logs.go:123] Gathering logs for container status ...
	I0108 21:04:20.891999  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0108 21:04:20.917568  124694 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0108 21:04:20.173147    6979 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	W0108 21:04:20.917600  124694 out.go:239] * 
	* 
	W0108 21:04:20.917764  124694 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0108 21:04:20.173147    6979 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0108 21:04:20.173147    6979 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 21:04:20.917788  124694 out.go:239] * 
	* 
	W0108 21:04:20.918668  124694 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:04:20.921286  124694 out.go:177] X Problems detected in kubelet:
	I0108 21:04:20.922717  124694 out.go:177]   Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937893    4359 projected.go:192] Error preparing data for projected volume kube-api-access-svv2t for pod kube-system/kube-proxy-wmrz2: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	I0108 21:04:20.925364  124694 out.go:177]   Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937978    4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t podName:35e9935b-759b-4c18-9d0b-2c0daaab9a1e nodeName:}" failed. No retries permitted until 2023-01-08 21:00:11.937956077 +0000 UTC m=+9.792765068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-svv2t" (UniqueName: "kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t") pod "kube-proxy-wmrz2" (UID: "35e9935b-759b-4c18-9d0b-2c0daaab9a1e") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	I0108 21:04:20.926971  124694 out.go:177]   Jan 08 21:00:10 test-preload-205820 kubelet[4359]: W0108 21:00:09.938038    4359 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	I0108 21:04:20.929431  124694 out.go:177] 
	W0108 21:04:20.930937  124694 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0108 21:04:20.173147    6979 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0108 21:04:20.173147    6979 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 21:04:20.931018  124694 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	* Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W0108 21:04:20.931068  124694 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	* Related issue: https://github.com/kubernetes/minikube/issues/5484
	I0108 21:04:20.932735  124694 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:69: out/minikube-linux-amd64 start -p test-preload-205820 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.6 failed: exit status 81
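Note on the failure above: the two fatal preflight errors (Port 2379 and Port 2380 in use) refer to etcd's client and peer ports, so an etcd instance left over from the earlier v1.24.4 start was most likely still listening inside the reused container when kubeadm init ran for v1.24.6. The suggestion printed by minikube ("lsof -p<port>") takes a process ID rather than a port; to locate a listener by port, commands along these lines could be run inside the node (e.g. after out/minikube-linux-amd64 ssh -p test-preload-205820), a sketch only, assuming ss (or lsof) is available in the node image:

	sudo ss -ltnp | grep -E ':2379|:2380'
	sudo crictl ps | grep etcd

kubeadm would also accept --ignore-preflight-errors=Port-2379,Port-2380 (the same mechanism the start command already uses for Port-10250 above), but that only suppresses the check and would not resolve the underlying conflict.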
panic.go:522: *** TestPreload FAILED at 2023-01-08 21:04:20.977007524 +0000 UTC m=+2223.120179816
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-205820
helpers_test.go:235: (dbg) docker inspect test-preload-205820:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "614931b1d191e00f7021dc1adb912c496e5d681efad786411f6c7e944eb761ce",
	        "Created": "2023-01-08T20:58:21.480695226Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 121415,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T20:58:22.055601402Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/614931b1d191e00f7021dc1adb912c496e5d681efad786411f6c7e944eb761ce/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/614931b1d191e00f7021dc1adb912c496e5d681efad786411f6c7e944eb761ce/hostname",
	        "HostsPath": "/var/lib/docker/containers/614931b1d191e00f7021dc1adb912c496e5d681efad786411f6c7e944eb761ce/hosts",
	        "LogPath": "/var/lib/docker/containers/614931b1d191e00f7021dc1adb912c496e5d681efad786411f6c7e944eb761ce/614931b1d191e00f7021dc1adb912c496e5d681efad786411f6c7e944eb761ce-json.log",
	        "Name": "/test-preload-205820",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-205820:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-205820",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ccd0153358d263b743a423c07ff20c700def661835b86fc81274e71554ff3780-init/diff:/var/lib/docker/overlay2/08b33854295261bb38a0074055b3dd7f2f489d35ea70ee5a594182ad1dc0fd2b/diff:/var/lib/docker/overlay2/5abe192e44c49d2a25560e4c1ae4a6ba5ec58100f8e1869d306a28b614caf414/diff:/var/lib/docker/overlay2/156a32df2dac676d7cd6558d5ad13be5054eb2fddc8d1b10137bf34bd6b1ddf7/diff:/var/lib/docker/overlay2/f3a5b8630133b5cdc48bbdb98f666efad221386dbd08f5fc096c7ba6d3f31667/diff:/var/lib/docker/overlay2/4d4ccd218e479874818175ec6c4e28f06f0084eed74b9a83af5a96e8bf41e6c7/diff:/var/lib/docker/overlay2/ab8a3247b29cb45a0024dbc2477a50b042f03e39064c09e23d21a2b5f331013c/diff:/var/lib/docker/overlay2/cd6161fa5d366910a9eb537894f374c49b0ba256c81a9c1ad3d4f4786d730668/diff:/var/lib/docker/overlay2/a26d88db409214dee405ed385f40b0d58e37ef670c26e65c1132eb27e3bb984d/diff:/var/lib/docker/overlay2/f76d833c20bf46fe7aec412f931039cdeb58b129552b97228c55fada8b05e63a/diff:/var/lib/docker/overlay2/584beb
80220462b8bc5a6c651735d55544ebef775c71b1a91b92235049a7e9da/diff:/var/lib/docker/overlay2/5209ebd508837f05291d5f22eff307db47f9c94691dd096cdc6ff6446db45461/diff:/var/lib/docker/overlay2/968fc302c0737742d4698bb0026b5cfc50c4b2934930db492c97b346fca2bd67/diff:/var/lib/docker/overlay2/b469d09ff82a75fb0f6dd04264a04b21d3d113c8577afc6dbc6a7dafe4022179/diff:/var/lib/docker/overlay2/2869116cf2f02bdc6170bde623a11022eba8e5c0f84630c9a0fd0a86483d0ed2/diff:/var/lib/docker/overlay2/ad4794ae637b0ce306963007b35562110fba050d9e3cd6196830448c749d7cc6/diff:/var/lib/docker/overlay2/a53b191897dfffb12ad2948cad16d668a0f7c2be5e3e6fd2556fe22d8d608879/diff:/var/lib/docker/overlay2/bbc7772821406640e29f442eb8752d92d7b72e8314035d6cf94255976b90642e/diff:/var/lib/docker/overlay2/5e3ac5600af83e3f6f756895fbd7a66fa97e799f659541508353078d057fd89b/diff:/var/lib/docker/overlay2/79c23e7084aa1de637cb09a7c07679d6c8d4ff6c6e7384f7f15cb549f8186fd6/diff:/var/lib/docker/overlay2/86ef1d4759376f1edb9cf43004ae59ba61200d79acea357b872de5247071531a/diff:/var/lib/d
ocker/overlay2/5c948a7514e051c2a3b4600944d475a674404c1f58b1bd7fd4d7b764be35c3e7/diff:/var/lib/docker/overlay2/71b25d81823337d776e0202f623b2f822a2ebba0e8f935e3211da145bbdac22d/diff:/var/lib/docker/overlay2/4261f26f8c837e7f8f07ba0cd2aca7274aa96561ac0bd104af28d44c15678f8d/diff:/var/lib/docker/overlay2/f2a71b21156c3ad367bb9fe8bde3e75df6681f9e505d34d8e1b635e50aa0e29b/diff:/var/lib/docker/overlay2/9ef1fde443b60bf4cda7ef2a51adcacef4383280f30eb6ee14e23959a5395e18/diff:/var/lib/docker/overlay2/aa727d5a432452b3e1ed2fc994db0bf2d5a47af89bc768a1e2b633baa8319025/diff:/var/lib/docker/overlay2/dc9499b738334b5626de1363086d853c4e6c2c5b2d51f58e8e1ed8346cc39f45/diff:/var/lib/docker/overlay2/03cc962675627b454b9857b5ffcac439143653d35cd42ad886571fbecd955c6c/diff:/var/lib/docker/overlay2/32064543ecb8bae609fda6b220233d7b73f4e601bd320e1d389451fb120c2459/diff:/var/lib/docker/overlay2/a0c00493cc3fdbdaa87f61faf83c8fcedb9ee4d5b24ecc35eceba0429b6dcc88/diff:/var/lib/docker/overlay2/18cbe77f8dab8144889e1deb46a8795fd207afbdfd9ea3fdeb526b72d7e
4a4cb/diff:/var/lib/docker/overlay2/dca4f5d0bca28bc75fa764afd39d07871f051c2d362fc2850ca957b975f75555/diff:/var/lib/docker/overlay2/cd071d8f661b9ff9c6d861fce06842188fda3d307062488b93944a97883f4a67/diff:/var/lib/docker/overlay2/5a0c166a1e11d0bb9f7de4a882be960f70ac972e91b3ab8fd83fc6eec2bf1d5e/diff:/var/lib/docker/overlay2/43eb3ae44d3bf3e6cbf5a66accacb1247ef210213c56a67097949d06794f4bbc/diff:/var/lib/docker/overlay2/b60cb5b525f4aed6b27cf2e451596e134d1185254f3798d5a91357814552e273/diff:/var/lib/docker/overlay2/e5c8c85e78fedb1981436f368ad70bdfe38d240dddceac3d103e1d09bc252cef/diff:/var/lib/docker/overlay2/7bf66ab7d17eda3c6760fe43588afa33bb56279f5a7202de9cc98b4711f0da5b/diff:/var/lib/docker/overlay2/d5dbd6e8a20b8681e7b55b57c382c8c9029da8496cc9e0f4d1eb3b224583535e/diff:/var/lib/docker/overlay2/de3ca9b514aa5071c0fd39d0d80fa080bce6e82e872747ec916ef7bf379c0c2a/diff:/var/lib/docker/overlay2/80bff03df7f2a6639d98a2c99d1cc55a36dc9110501452bfd7bce3bce496541e/diff:/var/lib/docker/overlay2/471bea5a4732f210c3fcf87a02643b615a96f6
39d01ac2c3e71f7886b23733a5/diff:/var/lib/docker/overlay2/29dd84326e3ef52c85008a60a7ff2225fd18f1d51ec3e60f143a0d93e442469d/diff:/var/lib/docker/overlay2/05d035eba3e40a396836d9917064869afd7ec70920f526f8903dc8f5a9f2dce3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ccd0153358d263b743a423c07ff20c700def661835b86fc81274e71554ff3780/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ccd0153358d263b743a423c07ff20c700def661835b86fc81274e71554ff3780/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ccd0153358d263b743a423c07ff20c700def661835b86fc81274e71554ff3780/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-205820",
	                "Source": "/var/lib/docker/volumes/test-preload-205820/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-205820",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-205820",
	                "name.minikube.sigs.k8s.io": "test-preload-205820",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "746f7405522a8c28f5e57d7a7fda75b53b92a8763f8f128a0e9615d82bed0a8b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/746f7405522a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-205820": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "614931b1d191",
	                        "test-preload-205820"
	                    ],
	                    "NetworkID": "6987da7ab2da74011fe53784d265ab03133276db3c92cc5963e6695e7e04136b",
	                    "EndpointID": "4468b674e4ce5b5362e41cac0caccb8c7bd01864fa22309b582182119ff6357a",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
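For context, the inspect output above shows the container was created at 2023-01-08T20:58:21Z and has been running since 20:58:22 (the initial v1.24.4 start), with /var backed by the named volume test-preload-205820; the second start therefore reused the existing machine (see "Skipping create...Using existing machine configuration" in the log below) rather than recreating it, which is consistent with etcd's ports still being bound. A quick way to check for container reuse directly, as a sketch:

	docker inspect -f '{{.Created}} {{.State.StartedAt}}' test-preload-205820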
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-205820 -n test-preload-205820
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-205820 -n test-preload-205820: exit status 2 (344.455697ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-205820 logs -n 25
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-205018 ssh -n                                                                 | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:52 UTC | 08 Jan 23 20:52 UTC |
	|         | multinode-205018-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| cp      | multinode-205018 cp multinode-205018-m03:/home/docker/cp-test.txt                       | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:52 UTC | 08 Jan 23 20:52 UTC |
	|         | multinode-205018:/home/docker/cp-test_multinode-205018-m03_multinode-205018.txt         |                      |         |         |                     |                     |
	| ssh     | multinode-205018 ssh -n                                                                 | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:52 UTC | 08 Jan 23 20:52 UTC |
	|         | multinode-205018-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-205018 ssh -n multinode-205018 sudo cat                                       | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:52 UTC | 08 Jan 23 20:52 UTC |
	|         | /home/docker/cp-test_multinode-205018-m03_multinode-205018.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-205018 cp multinode-205018-m03:/home/docker/cp-test.txt                       | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:52 UTC | 08 Jan 23 20:52 UTC |
	|         | multinode-205018-m02:/home/docker/cp-test_multinode-205018-m03_multinode-205018-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-205018 ssh -n                                                                 | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:52 UTC | 08 Jan 23 20:52 UTC |
	|         | multinode-205018-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-205018 ssh -n multinode-205018-m02 sudo cat                                   | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:52 UTC | 08 Jan 23 20:52 UTC |
	|         | /home/docker/cp-test_multinode-205018-m03_multinode-205018-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-205018 node stop m03                                                          | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:52 UTC | 08 Jan 23 20:52 UTC |
	| node    | multinode-205018 node start                                                             | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:52 UTC | 08 Jan 23 20:53 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-205018                                                                | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:53 UTC |                     |
	| stop    | -p multinode-205018                                                                     | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:53 UTC | 08 Jan 23 20:53 UTC |
	| start   | -p multinode-205018                                                                     | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:53 UTC | 08 Jan 23 20:55 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-205018                                                                | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:55 UTC |                     |
	| node    | multinode-205018 node delete                                                            | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:55 UTC | 08 Jan 23 20:55 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-205018 stop                                                                   | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:55 UTC | 08 Jan 23 20:56 UTC |
	| start   | -p multinode-205018                                                                     | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:56 UTC | 08 Jan 23 20:57 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | list -p multinode-205018                                                                | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:57 UTC |                     |
	| start   | -p multinode-205018-m02                                                                 | multinode-205018-m02 | jenkins | v1.28.0 | 08 Jan 23 20:57 UTC |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| start   | -p multinode-205018-m03                                                                 | multinode-205018-m03 | jenkins | v1.28.0 | 08 Jan 23 20:57 UTC | 08 Jan 23 20:58 UTC |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | add -p multinode-205018                                                                 | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:58 UTC |                     |
	| delete  | -p multinode-205018-m03                                                                 | multinode-205018-m03 | jenkins | v1.28.0 | 08 Jan 23 20:58 UTC | 08 Jan 23 20:58 UTC |
	| delete  | -p multinode-205018                                                                     | multinode-205018     | jenkins | v1.28.0 | 08 Jan 23 20:58 UTC | 08 Jan 23 20:58 UTC |
	| start   | -p test-preload-205820                                                                  | test-preload-205820  | jenkins | v1.28.0 | 08 Jan 23 20:58 UTC | 08 Jan 23 20:59 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --wait=true --preload=false                                                             |                      |         |         |                     |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| ssh     | -p test-preload-205820                                                                  | test-preload-205820  | jenkins | v1.28.0 | 08 Jan 23 20:59 UTC | 08 Jan 23 20:59 UTC |
	|         | -- sudo crictl pull                                                                     |                      |         |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| start   | -p test-preload-205820                                                                  | test-preload-205820  | jenkins | v1.28.0 | 08 Jan 23 20:59 UTC |                     |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=docker                                                             |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.6                                                            |                      |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 20:59:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:59:15.922988  124694 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:59:15.923190  124694 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:59:15.923199  124694 out.go:309] Setting ErrFile to fd 2...
	I0108 20:59:15.923206  124694 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:59:15.923344  124694 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 20:59:15.923946  124694 out.go:303] Setting JSON to false
	I0108 20:59:15.925106  124694 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2505,"bootTime":1673209051,"procs":425,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:59:15.925171  124694 start.go:135] virtualization: kvm guest
	I0108 20:59:15.927955  124694 out.go:177] * [test-preload-205820] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:59:15.929374  124694 notify.go:220] Checking for updates...
	I0108 20:59:15.929404  124694 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 20:59:15.931238  124694 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:59:15.932840  124694 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 20:59:15.935379  124694 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 20:59:15.937020  124694 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:59:15.939039  124694 config.go:180] Loaded profile config "test-preload-205820": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0108 20:59:15.941039  124694 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I0108 20:59:15.942409  124694 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 20:59:15.970300  124694 docker.go:137] docker version: linux-20.10.22
	I0108 20:59:15.970401  124694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:59:16.062763  124694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2023-01-08 20:59:15.989379004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:59:16.062862  124694 docker.go:254] overlay module found
	I0108 20:59:16.065073  124694 out.go:177] * Using the docker driver based on existing profile
	I0108 20:59:16.066398  124694 start.go:294] selected driver: docker
	I0108 20:59:16.066409  124694 start.go:838] validating driver "docker" against &{Name:test-preload-205820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-205820 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 20:59:16.066519  124694 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:59:16.067271  124694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:59:16.159790  124694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2023-01-08 20:59:16.087078013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:59:16.160075  124694 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 20:59:16.160096  124694 cni.go:95] Creating CNI manager for ""
	I0108 20:59:16.160103  124694 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 20:59:16.160116  124694 start_flags.go:317] config:
	{Name:test-preload-205820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-205820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 20:59:16.162204  124694 out.go:177] * Starting control plane node test-preload-205820 in cluster test-preload-205820
	I0108 20:59:16.165845  124694 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 20:59:16.167544  124694 out.go:177] * Pulling base image ...
	I0108 20:59:16.169023  124694 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I0108 20:59:16.169127  124694 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 20:59:16.191569  124694 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 20:59:16.191596  124694 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 20:59:16.488573  124694 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I0108 20:59:16.488598  124694 cache.go:57] Caching tarball of preloaded images
	I0108 20:59:16.488917  124694 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I0108 20:59:16.491216  124694 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
	I0108 20:59:16.492629  124694 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I0108 20:59:17.039968  124694 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I0108 20:59:32.016227  124694 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I0108 20:59:32.016331  124694 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I0108 20:59:32.888834  124694 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.6 on containerd
	I0108 20:59:32.888992  124694 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/config.json ...
	I0108 20:59:32.889209  124694 cache.go:193] Successfully downloaded all kic artifacts
	I0108 20:59:32.889259  124694 start.go:364] acquiring machines lock for test-preload-205820: {Name:mk27a98eef575d3995d47e9b2c3065d636302b25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:59:32.889363  124694 start.go:368] acquired machines lock for "test-preload-205820" in 75.02µs
	I0108 20:59:32.889385  124694 start.go:96] Skipping create...Using existing machine configuration
	I0108 20:59:32.889395  124694 fix.go:55] fixHost starting: 
	I0108 20:59:32.889636  124694 cli_runner.go:164] Run: docker container inspect test-preload-205820 --format={{.State.Status}}
	I0108 20:59:32.913783  124694 fix.go:103] recreateIfNeeded on test-preload-205820: state=Running err=<nil>
	W0108 20:59:32.913829  124694 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 20:59:32.917800  124694 out.go:177] * Updating the running docker "test-preload-205820" container ...
	I0108 20:59:32.919462  124694 machine.go:88] provisioning docker machine ...
	I0108 20:59:32.919513  124694 ubuntu.go:169] provisioning hostname "test-preload-205820"
	I0108 20:59:32.919568  124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
	I0108 20:59:32.942125  124694 main.go:134] libmachine: Using SSH client type: native
	I0108 20:59:32.942374  124694 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32892 <nil> <nil>}
	I0108 20:59:32.942400  124694 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-205820 && echo "test-preload-205820" | sudo tee /etc/hostname
	I0108 20:59:33.063328  124694 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-205820
	
	I0108 20:59:33.063392  124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
	I0108 20:59:33.086668  124694 main.go:134] libmachine: Using SSH client type: native
	I0108 20:59:33.086810  124694 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32892 <nil> <nil>}
	I0108 20:59:33.086827  124694 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-205820' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-205820/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-205820' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:59:33.203200  124694 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:59:33.203231  124694 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 20:59:33.203257  124694 ubuntu.go:177] setting up certificates
	I0108 20:59:33.203273  124694 provision.go:83] configureAuth start
	I0108 20:59:33.203326  124694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-205820
	I0108 20:59:33.226487  124694 provision.go:138] copyHostCerts
	I0108 20:59:33.226543  124694 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 20:59:33.226550  124694 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 20:59:33.226616  124694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 20:59:33.226699  124694 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 20:59:33.226708  124694 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 20:59:33.226734  124694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 20:59:33.226788  124694 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 20:59:33.226795  124694 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 20:59:33.226817  124694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 20:59:33.226869  124694 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.test-preload-205820 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-205820]
	I0108 20:59:33.438802  124694 provision.go:172] copyRemoteCerts
	I0108 20:59:33.438859  124694 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:59:33.438889  124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
	I0108 20:59:33.462207  124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
	I0108 20:59:33.550321  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0108 20:59:33.566609  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 20:59:33.582624  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 20:59:33.598229  124694 provision.go:86] duration metric: configureAuth took 394.945613ms
	I0108 20:59:33.598253  124694 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:59:33.598410  124694 config.go:180] Loaded profile config "test-preload-205820": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
	I0108 20:59:33.598423  124694 machine.go:91] provisioned docker machine in 678.92515ms
	I0108 20:59:33.598432  124694 start.go:300] post-start starting for "test-preload-205820" (driver="docker")
	I0108 20:59:33.598441  124694 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:59:33.598485  124694 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:59:33.598529  124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
	I0108 20:59:33.620869  124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
	I0108 20:59:33.706833  124694 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:59:33.709432  124694 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:59:33.709452  124694 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:59:33.709460  124694 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:59:33.709466  124694 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 20:59:33.709473  124694 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 20:59:33.709515  124694 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 20:59:33.709584  124694 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 20:59:33.709657  124694 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:59:33.716065  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 20:59:33.732647  124694 start.go:303] post-start completed in 134.201143ms
	I0108 20:59:33.732700  124694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:59:33.732750  124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
	I0108 20:59:33.756085  124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
	I0108 20:59:33.835916  124694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:59:33.839883  124694 fix.go:57] fixHost completed within 950.482339ms
	I0108 20:59:33.839906  124694 start.go:83] releasing machines lock for "test-preload-205820", held for 950.52777ms
	I0108 20:59:33.839991  124694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-205820
	I0108 20:59:33.862646  124694 ssh_runner.go:195] Run: cat /version.json
	I0108 20:59:33.862692  124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
	I0108 20:59:33.862773  124694 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0108 20:59:33.862826  124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
	I0108 20:59:33.886491  124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
	I0108 20:59:33.886912  124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
	I0108 20:59:33.984937  124694 ssh_runner.go:195] Run: systemctl --version
	I0108 20:59:33.988836  124694 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 20:59:34.000114  124694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 20:59:34.008642  124694 docker.go:189] disabling docker service ...
	I0108 20:59:34.008693  124694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:59:34.017530  124694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:59:34.025801  124694 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:59:34.122708  124694 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:59:34.217961  124694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:59:34.226765  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:59:34.238797  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0108 20:59:34.246194  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 20:59:34.253558  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 20:59:34.261040  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 20:59:34.268683  124694 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:59:34.274677  124694 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:59:34.280603  124694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:59:34.370755  124694 ssh_runner.go:195] Run: sudo systemctl restart containerd
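The four sed commands above rewrite selected keys in /etc/containerd/config.toml (sandbox image, OOM-score restriction, cgroup driver, CNI conf_dir) before containerd is restarted. The Go sketch below issues the same in-place edits locally; the helper name patchContainerdConfig and the use of os/exec are illustrative assumptions, not minikube's code, which sends these commands over SSH.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // patchContainerdConfig mirrors the sed edits in the log above: each key
    // line in config.toml is replaced wholesale with the desired value.
    func patchContainerdConfig(configPath string) error {
        edits := map[string]string{
            "sandbox_image":          `sandbox_image = "k8s.gcr.io/pause:3.7"`,
            "restrict_oom_score_adj": "restrict_oom_score_adj = false",
            "SystemdCgroup":          "SystemdCgroup = false",
            "conf_dir":               `conf_dir = "/etc/cni/net.mk"`,
        }
        for key, line := range edits {
            expr := fmt.Sprintf("s|^.*%s = .*$|%s|", key, line)
            cmd := exec.Command("sudo", "sed", "-e", expr, "-i", configPath)
            if out, err := cmd.CombinedOutput(); err != nil {
                return fmt.Errorf("sed %q: %v: %s", key, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := patchContainerdConfig("/etc/containerd/config.toml"); err != nil {
            fmt.Println(err)
        }
    }

After the edits, a daemon-reload and "systemctl restart containerd" (as in the log) are what actually make the new config take effect.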
	I0108 20:59:34.445671  124694 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 20:59:34.445735  124694 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 20:59:34.449843  124694 start.go:472] Will wait 60s for crictl version
	I0108 20:59:34.449900  124694 ssh_runner.go:195] Run: sudo crictl version
	I0108 20:59:34.476629  124694 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T20:59:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 20:59:45.523600  124694 ssh_runner.go:195] Run: sudo crictl version
	I0108 20:59:45.547086  124694 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 20:59:45.547154  124694 ssh_runner.go:195] Run: containerd --version
	I0108 20:59:45.569590  124694 ssh_runner.go:195] Run: containerd --version
	I0108 20:59:45.594001  124694 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.10 ...
	I0108 20:59:45.595715  124694 cli_runner.go:164] Run: docker network inspect test-preload-205820 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:59:45.617246  124694 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0108 20:59:45.620504  124694 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I0108 20:59:45.620559  124694 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:59:45.642354  124694 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
	I0108 20:59:45.642439  124694 ssh_runner.go:195] Run: which lz4
	I0108 20:59:45.645255  124694 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 20:59:45.648306  124694 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0108 20:59:45.648333  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
	I0108 20:59:46.604476  124694 containerd.go:496] Took 0.959252 seconds to copy over tarball
	I0108 20:59:46.604556  124694 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 20:59:49.388621  124694 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.784042744s)
	I0108 20:59:49.388652  124694 containerd.go:503] Took 2.784153 seconds to extract the tarball
	I0108 20:59:49.388661  124694 ssh_runner.go:146] rm: /preloaded.tar.lz4
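The sequence above is the preload path: stat the node for /preloaded.tar.lz4, copy the cached tarball over when it is missing, extract it into /var with lz4-aware tar, then delete it. The Go sketch below reproduces those steps with local exec in place of minikube's SSH runner; the plain "cp" standing in for scp and the function name applyPreload are assumptions for illustration only.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyPreload checks for the tarball on the node, copies it if absent,
    // extracts it into /var, and removes it, matching the log above.
    func applyPreload(localTarball string) error {
        const remote = "/preloaded.tar.lz4"
        if err := exec.Command("stat", remote).Run(); err != nil {
            // Not present yet; minikube scp's it over SSH, a plain copy is assumed here.
            if err := exec.Command("sudo", "cp", localTarball, remote).Run(); err != nil {
                return fmt.Errorf("copy preload: %w", err)
            }
        }
        if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", remote).CombinedOutput(); err != nil {
            return fmt.Errorf("extract preload: %v: %s", err, out)
        }
        return exec.Command("sudo", "rm", "-f", remote).Run()
    }

    func main() {
        fmt.Println(applyPreload("preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4"))
    }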
	I0108 20:59:49.410719  124694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:59:49.511828  124694 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 20:59:49.595221  124694 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:59:49.633196  124694 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.6 k8s.gcr.io/kube-controller-manager:v1.24.6 k8s.gcr.io/kube-scheduler:v1.24.6 k8s.gcr.io/kube-proxy:v1.24.6 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 20:59:49.633289  124694 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:59:49.633307  124694 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.6
	I0108 20:59:49.633331  124694 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.6
	I0108 20:59:49.633356  124694 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
	I0108 20:59:49.633443  124694 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
	I0108 20:59:49.633489  124694 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0108 20:59:49.633318  124694 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I0108 20:59:49.633821  124694 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.6
	I0108 20:59:49.634498  124694 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.6: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I0108 20:59:49.634524  124694 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
	I0108 20:59:49.634567  124694 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
	I0108 20:59:49.634498  124694 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0108 20:59:49.634576  124694 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:59:49.634592  124694 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.6: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.6
	I0108 20:59:49.634597  124694 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.6: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.6
	I0108 20:59:49.634594  124694 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.6: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.6
	I0108 20:59:50.047554  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
	I0108 20:59:50.082929  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6"
	I0108 20:59:50.099888  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
	I0108 20:59:50.103323  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6"
	I0108 20:59:50.117424  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
	I0108 20:59:50.146323  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6"
	I0108 20:59:50.152220  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.6"
	I0108 20:59:50.398896  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 20:59:50.629706  124694 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0108 20:59:50.629756  124694 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
	I0108 20:59:50.629794  124694 ssh_runner.go:195] Run: which crictl
	I0108 20:59:50.816705  124694 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.6" does not exist at hash "c6c20157a42337ecb7675be59e1dc34bc5a91288c7eeac1e30ec97767a9055eb" in container runtime
	I0108 20:59:50.816826  124694 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I0108 20:59:50.816908  124694 ssh_runner.go:195] Run: which crictl
	I0108 20:59:50.834757  124694 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0108 20:59:50.834807  124694 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
	I0108 20:59:50.834848  124694 ssh_runner.go:195] Run: which crictl
	I0108 20:59:50.922638  124694 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.6" does not exist at hash "c786c777a4e1c21907e77042428837645fa382d3bd14925cf78f0d163d6d332e" in container runtime
	I0108 20:59:50.922682  124694 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.6
	I0108 20:59:50.922719  124694 ssh_runner.go:195] Run: which crictl
	I0108 20:59:50.934129  124694 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0108 20:59:51.000970  124694 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0108 20:59:50.942667  124694 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.6" does not exist at hash "0bb39497ab33bb5f8aaff88ced53a5fcd360fcf5da647609619d4f5c8f1483d2" in container runtime
	I0108 20:59:51.001020  124694 ssh_runner.go:195] Run: which crictl
	I0108 20:59:51.001040  124694 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.6
	I0108 20:59:51.001068  124694 ssh_runner.go:195] Run: which crictl
	I0108 20:59:51.015918  124694 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.6" does not exist at hash "860f263331c9513ddab44d4d8a9a4a7304313b3aa0776decc1d7fc6acdd69ab0" in container runtime
	I0108 20:59:51.015958  124694 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.6
	I0108 20:59:51.016003  124694 ssh_runner.go:195] Run: which crictl
	I0108 20:59:51.052154  124694 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0108 20:59:51.052200  124694 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:59:51.052241  124694 ssh_runner.go:195] Run: which crictl
	I0108 20:59:51.052242  124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
	I0108 20:59:51.052305  124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6
	I0108 20:59:51.052367  124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
	I0108 20:59:51.052412  124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6
	I0108 20:59:51.052474  124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
	I0108 20:59:51.052542  124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6
	I0108 20:59:52.140730  124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7: (1.088416372s)
	I0108 20:59:52.140757  124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
	I0108 20:59:52.140759  124694 ssh_runner.go:235] Completed: which crictl: (1.088481701s)
	I0108 20:59:52.140801  124694 ssh_runner.go:235] Completed: which crictl: (1.124782782s)
	I0108 20:59:52.140815  124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:59:52.140840  124694 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0108 20:59:52.140885  124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6: (1.088559722s)
	I0108 20:59:52.140843  124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6
	I0108 20:59:52.140906  124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6
	I0108 20:59:52.140996  124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6: (1.088560881s)
	I0108 20:59:52.141009  124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0: (1.088624706s)
	I0108 20:59:52.141014  124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6
	I0108 20:59:52.141017  124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
	I0108 20:59:52.141071  124694 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0108 20:59:52.141105  124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6: (1.088539031s)
	I0108 20:59:52.141119  124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6
	I0108 20:59:52.141068  124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6: (1.088569381s)
	I0108 20:59:52.141133  124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
	I0108 20:59:52.141193  124694 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0108 20:59:52.235063  124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0108 20:59:52.235158  124694 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0108 20:59:52.235188  124694 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0108 20:59:52.235208  124694 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.7
	I0108 20:59:52.235211  124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6
	I0108 20:59:52.235244  124694 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
	I0108 20:59:52.235262  124694 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0108 20:59:52.235301  124694 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0108 20:59:52.348684  124694 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
	I0108 20:59:52.348714  124694 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0108 20:59:52.348759  124694 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I0108 20:59:52.348772  124694 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0108 20:59:53.355117  124694 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (1.006333066s)
	I0108 20:59:53.355138  124694 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
	I0108 20:59:53.355161  124694 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0108 20:59:53.355197  124694 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
	I0108 20:59:58.744440  124694 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (5.389207325s)
	I0108 20:59:58.744469  124694 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
	I0108 20:59:58.744495  124694 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0108 20:59:58.744532  124694 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0108 20:59:59.645452  124694 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0108 20:59:59.645514  124694 cache_images.go:92] LoadImages completed in 10.012283055s
	W0108 20:59:59.645650  124694 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6: no such file or directory
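LoadImages above works image by image: probe containerd's k8s.io namespace with "ctr images check | grep", remove any copy whose hash does not match with crictl, then import the cached tarball with "ctr images import". The warning is raised because the kube-controller-manager tarball is missing from the local cache, so that one image cannot be imported. The following Go sketch captures the per-image flow in simplified form; the real code also compares the image ID against an expected hash, which is omitted here, and the helper name ensureImage is an assumption.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureImage loads one cached image tarball into containerd's k8s.io
    // namespace if the reference is not already present.
    func ensureImage(ref, tarball string) error {
        // "ctr images check | grep <ref>" exits non-zero when the image is absent.
        check := fmt.Sprintf("sudo ctr -n=k8s.io images check | grep %s", ref)
        if err := exec.Command("/bin/bash", "-c", check).Run(); err == nil {
            return nil // already loaded; the real code would still verify the hash
        }
        // Drop any stale copy before importing the cached tarball.
        _ = exec.Command("sudo", "crictl", "rmi", ref).Run()
        return exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarball).Run()
    }

    func main() {
        fmt.Println(ensureImage("k8s.gcr.io/pause:3.7", "/var/lib/minikube/images/pause_3.7"))
    }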
	I0108 20:59:59.645712  124694 ssh_runner.go:195] Run: sudo crictl info
	I0108 20:59:59.719369  124694 cni.go:95] Creating CNI manager for ""
	I0108 20:59:59.719404  124694 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 20:59:59.719417  124694 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:59:59.719431  124694 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-205820 NodeName:test-preload-205820 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 20:59:59.719633  124694 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "test-preload-205820"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 20:59:59.719739  124694 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-205820 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.6 ClusterName:test-preload-205820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 20:59:59.719791  124694 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
	I0108 20:59:59.726680  124694 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:59:59.726736  124694 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 20:59:59.734052  124694 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (512 bytes)
	I0108 20:59:59.749257  124694 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:59:59.764256  124694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
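The three "scp memory" lines above write generated content (the kubelet drop-in, the kubelet unit, and kubeadm.yaml.new) straight from memory to paths on the node. A hedged Go sketch of that pattern follows, using "sudo tee" with the bytes on stdin as a local stand-in for the SSH transfer; the abbreviated drop-in text and the helper name writeRemote are assumptions, not minikube's implementation.

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // writeRemote streams in-memory bytes to a privileged path via "sudo tee",
    // the local analogue of the "scp memory --> <path>" lines above.
    func writeRemote(path string, content []byte) error {
        cmd := exec.Command("sudo", "tee", path)
        cmd.Stdin = bytes.NewReader(content)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("write %s: %v: %s", path, err, out)
        }
        return nil
    }

    func main() {
        dropIn := []byte("[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet ...\n")
        fmt.Println(writeRemote("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", dropIn))
    }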
	I0108 20:59:59.823242  124694 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 20:59:59.826766  124694 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820 for IP: 192.168.67.2
	I0108 20:59:59.826880  124694 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 20:59:59.826936  124694 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 20:59:59.827034  124694 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/client.key
	I0108 20:59:59.827114  124694 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/apiserver.key.c7fa3a9e
	I0108 20:59:59.827165  124694 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/proxy-client.key
	I0108 20:59:59.827281  124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 20:59:59.827327  124694 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 20:59:59.827342  124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:59:59.827372  124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 20:59:59.827409  124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:59:59.827438  124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 20:59:59.827512  124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 20:59:59.828247  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 20:59:59.848605  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 20:59:59.867107  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 20:59:59.929393  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 20:59:59.947265  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:59:59.967659  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 20:59:59.986203  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:00:00.028839  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:00:00.054242  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:00:00.071784  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:00:00.087997  124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:00:00.123064  124694 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:00:00.135539  124694 ssh_runner.go:195] Run: openssl version
	I0108 21:00:00.140139  124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:00:00.147247  124694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:00:00.150148  124694 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:00:00.150197  124694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:00:00.154652  124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:00:00.161321  124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:00:00.169127  124694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:00:00.171911  124694 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:00:00.171967  124694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:00:00.176639  124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:00:00.182896  124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:00:00.189696  124694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:00:00.210855  124694 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:00:00.210904  124694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:00:00.215636  124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
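The cert steps above install each PEM under /usr/share/ca-certificates and then create an OpenSSL hash symlink (for example 3ec20f2e.0) in /etc/ssl/certs so the trust store can resolve it. The Go sketch below shows the same "openssl x509 -hash" plus symlink idea; it is a local illustration that would need the same root privileges, and the function name linkCACert is assumed.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert computes the subject hash of a PEM certificate and exposes it
    // as /etc/ssl/certs/<hash>.0, mirroring the "test -L ... || ln -fs" step above.
    func linkCACert(pemPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", fmt.Errorf("hash %s: %w", pemPath, err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        // Only create the symlink when it does not already exist.
        if _, err := os.Lstat(link); err == nil {
            return link, nil
        }
        return link, os.Symlink(pemPath, link)
    }

    func main() {
        fmt.Println(linkCACert("/usr/share/ca-certificates/103722.pem"))
    }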
	I0108 21:00:00.222153  124694 kubeadm.go:396] StartCluster: {Name:test-preload-205820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-205820 Namespace:default APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:00:00.222257  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:00:00.222298  124694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:00:00.245669  124694 cri.go:87] found id: "43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4"
	I0108 21:00:00.245696  124694 cri.go:87] found id: "3852802493079c7473ee812611ecf809b363dd7bd001d0400d405c7b881a6881"
	I0108 21:00:00.245706  124694 cri.go:87] found id: "0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659"
	I0108 21:00:00.245715  124694 cri.go:87] found id: ""
	I0108 21:00:00.245772  124694 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:00:00.277898  124694 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459","pid":1612,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459/rootfs","created":"2023-01-08T20:58:44.075786098Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c","io.kubernetes.cri.sandbox-name":"etcd-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817","pid":2685,"status":"running","bundle":"/ru
n/containerd/io.containerd.runtime.v2.task/k8s.io/08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817/rootfs","created":"2023-01-08T20:59:11.277618302Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659","pid":3743,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659","rootfs":"/run/containerd/io.containerd.runtime.v2.task
/k8s.io/0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659/rootfs","created":"2023-01-08T20:59:53.536252787Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-48vmf","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb","pid":3679,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb/rootfs","created":"2023-01-08T20:59:52.952963041Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io
.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_bdbd16cd-b53b-4309-ad17-7915a6d7b693","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3","pid":2625,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3/rootfs","created":"2023-01-08T20:59:11.178050048Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubern
etes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-48vmf_d43c5f88-44b8-4ab6-bc5b-f2883eda56e2","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-48vmf","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae","pid":2211,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae/rootfs","created":"2023-01-08T20:59:03.662818408Z","annotations":{"io.kubernetes.cri.container-type":"sandbo
x","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wmrz2_35e9935b-759b-4c18-9d0b-2c0daaab9a1e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-wmrz2","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c","pid":1658,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c/rootfs","created":"2023-01-08T20:58:44.120902562Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io
.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-scheduler:v1.24.4","io.kubernetes.cri.sandbox-id":"c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071","pid":2488,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071/rootfs","created":"2023-01-08T20:59:07.90993923Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20221004-44d545d1","io.kubernetes.cri.sandbox-id":"414cc3f9f286440d220
4fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a","io.kubernetes.cri.sandbox-name":"kindnet-mtvg5","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd","pid":1657,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd/rootfs","created":"2023-01-08T20:58:44.121187645Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-apiserver:v1.24.4","io.kubernetes.cri.sandbox-id":"a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"oci
Version":"1.0.2-dev","id":"414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a","pid":2210,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a/rootfs","created":"2023-01-08T20:59:03.715705604Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-mtvg5_1257f157-44a7-41fe-9d98-48b85ce53a40","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-mtvg5","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersio
n":"1.0.2-dev","id":"41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265","pid":3646,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265/rootfs","created":"2023-01-08T20:59:52.914259586Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-mtvg5_1257f157-44a7-41fe-9d98-48b85ce53a40","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-mtvg5","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.
0.2-dev","id":"43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4","pid":4073,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4/rootfs","created":"2023-01-08T20:59:59.961439321Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c","pid":1522,"status":"running","bundle":"/run/containerd/io.con
tainerd.runtime.v2.task/k8s.io/5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c/rootfs","created":"2023-01-08T20:58:43.912562088Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-205820_0106aa4904eaf95a3dcc4972da83cce0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6","pid":1611,"status":"running","bundle":"/run/containerd/io.co
ntainerd.runtime.v2.task/k8s.io/67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6/rootfs","created":"2023-01-08T20:58:44.078720095Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111","pid":3579,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111","rootfs":"/run/containerd/io.containerd.runti
me.v2.task/k8s.io/7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111/rootfs","created":"2023-01-08T20:59:52.820275074Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-205820_0d00ad4c93ccd906fbcaecbff49fd727","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855","pid":3470,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855","rootfs":"/run/container
d/io.containerd.runtime.v2.task/k8s.io/73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855/rootfs","created":"2023-01-08T20:59:52.622948749Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-205820_3137f4b6a8ebd97ba2fc8851160ac0b1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d","pid":3442,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea7454
6fe19d4e0496d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d/rootfs","created":"2023-01-08T20:59:52.55532244Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-205820_044b6365f10644e1fab9f12495485e76","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3","pid":1520,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2b1f431ca407ff679e24ad09153258
04e13945554f39501c29ac7dcf5ab81f3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3/rootfs","created":"2023-01-08T20:58:43.914531641Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-205820_044b6365f10644e1fab9f12495485e76","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c5391d45b9b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462","pid":2246,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5391d45b9
b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5391d45b9b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462/rootfs","created":"2023-01-08T20:59:03.781592888Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae","io.kubernetes.cri.sandbox-name":"kube-proxy-wmrz2","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65","pid":1521,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65/rootfs","cre
ated":"2023-01-08T20:58:43.918296824Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-205820_0d00ad4c93ccd906fbcaecbff49fd727","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf","pid":2624,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d4403
53ac76bf/rootfs","created":"2023-01-08T20:59:11.177965157Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_bdbd16cd-b53b-4309-ad17-7915a6d7b693","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd15863848","pid":2686,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd15863848","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd1586384
8/rootfs","created":"2023-01-08T20:59:11.277494639Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-48vmf","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f","pid":1523,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f/rootfs","created":"2023-01-08T20:58:43.918339088Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox
-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-205820_3137f4b6a8ebd97ba2fc8851160ac0b1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67","pid":3427,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67/rootfs","created":"2023-01-08T20:59:52.545724953Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-peri
od":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-48vmf_d43c5f88-44b8-4ab6-bc5b-f2883eda56e2","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-48vmf","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0","pid":3658,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0/rootfs","created":"2023-01-08T20:59:52.920247257Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.san
dbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wmrz2_35e9935b-759b-4c18-9d0b-2c0daaab9a1e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-wmrz2","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb","pid":3534,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb/rootfs","created":"2023-01-08T20:59:52.73552926Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-perio
d":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-205820_0106aa4904eaf95a3dcc4972da83cce0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
	I0108 21:00:00.278314  124694 cri.go:124] list returned 26 containers
	I0108 21:00:00.278332  124694 cri.go:127] container: {ID:065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459 Status:running}
	I0108 21:00:00.278347  124694 cri.go:129] skipping 065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459 - not in ps
	I0108 21:00:00.278355  124694 cri.go:127] container: {ID:08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817 Status:running}
	I0108 21:00:00.278368  124694 cri.go:129] skipping 08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817 - not in ps
	I0108 21:00:00.278384  124694 cri.go:127] container: {ID:0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659 Status:running}
	I0108 21:00:00.278397  124694 cri.go:133] skipping {0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659 running}: state = "running", want "paused"
	I0108 21:00:00.278410  124694 cri.go:127] container: {ID:10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb Status:running}
	I0108 21:00:00.278422  124694 cri.go:129] skipping 10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb - not in ps
	I0108 21:00:00.278433  124694 cri.go:127] container: {ID:12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3 Status:running}
	I0108 21:00:00.278442  124694 cri.go:129] skipping 12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3 - not in ps
	I0108 21:00:00.278451  124694 cri.go:127] container: {ID:149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae Status:running}
	I0108 21:00:00.278461  124694 cri.go:129] skipping 149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae - not in ps
	I0108 21:00:00.278471  124694 cri.go:127] container: {ID:2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c Status:running}
	I0108 21:00:00.278482  124694 cri.go:129] skipping 2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c - not in ps
	I0108 21:00:00.278493  124694 cri.go:127] container: {ID:2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071 Status:running}
	I0108 21:00:00.278502  124694 cri.go:129] skipping 2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071 - not in ps
	I0108 21:00:00.278512  124694 cri.go:127] container: {ID:40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd Status:running}
	I0108 21:00:00.278525  124694 cri.go:129] skipping 40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd - not in ps
	I0108 21:00:00.278536  124694 cri.go:127] container: {ID:414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a Status:running}
	I0108 21:00:00.278547  124694 cri.go:129] skipping 414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a - not in ps
	I0108 21:00:00.278554  124694 cri.go:127] container: {ID:41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265 Status:running}
	I0108 21:00:00.278566  124694 cri.go:129] skipping 41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265 - not in ps
	I0108 21:00:00.278576  124694 cri.go:127] container: {ID:43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4 Status:running}
	I0108 21:00:00.278588  124694 cri.go:133] skipping {43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4 running}: state = "running", want "paused"
	I0108 21:00:00.278603  124694 cri.go:127] container: {ID:5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c Status:running}
	I0108 21:00:00.278615  124694 cri.go:129] skipping 5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c - not in ps
	I0108 21:00:00.278633  124694 cri.go:127] container: {ID:67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6 Status:running}
	I0108 21:00:00.278644  124694 cri.go:129] skipping 67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6 - not in ps
	I0108 21:00:00.278651  124694 cri.go:127] container: {ID:7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111 Status:running}
	I0108 21:00:00.278660  124694 cri.go:129] skipping 7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111 - not in ps
	I0108 21:00:00.278667  124694 cri.go:127] container: {ID:73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855 Status:running}
	I0108 21:00:00.278679  124694 cri.go:129] skipping 73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855 - not in ps
	I0108 21:00:00.278687  124694 cri.go:127] container: {ID:833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d Status:running}
	I0108 21:00:00.278699  124694 cri.go:129] skipping 833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d - not in ps
	I0108 21:00:00.278707  124694 cri.go:127] container: {ID:a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3 Status:running}
	I0108 21:00:00.278719  124694 cri.go:129] skipping a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3 - not in ps
	I0108 21:00:00.278729  124694 cri.go:127] container: {ID:c5391d45b9b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462 Status:running}
	I0108 21:00:00.278737  124694 cri.go:129] skipping c5391d45b9b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462 - not in ps
	I0108 21:00:00.278744  124694 cri.go:127] container: {ID:c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65 Status:running}
	I0108 21:00:00.278756  124694 cri.go:129] skipping c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65 - not in ps
	I0108 21:00:00.278767  124694 cri.go:127] container: {ID:c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf Status:running}
	I0108 21:00:00.278780  124694 cri.go:129] skipping c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf - not in ps
	I0108 21:00:00.278790  124694 cri.go:127] container: {ID:c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd15863848 Status:running}
	I0108 21:00:00.278804  124694 cri.go:129] skipping c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd15863848 - not in ps
	I0108 21:00:00.278814  124694 cri.go:127] container: {ID:d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f Status:running}
	I0108 21:00:00.278822  124694 cri.go:129] skipping d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f - not in ps
	I0108 21:00:00.278830  124694 cri.go:127] container: {ID:ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67 Status:running}
	I0108 21:00:00.278842  124694 cri.go:129] skipping ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67 - not in ps
	I0108 21:00:00.278852  124694 cri.go:127] container: {ID:ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0 Status:running}
	I0108 21:00:00.278862  124694 cri.go:129] skipping ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0 - not in ps
	I0108 21:00:00.278872  124694 cri.go:127] container: {ID:ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb Status:running}
	I0108 21:00:00.278883  124694 cri.go:129] skipping ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb - not in ps
	I0108 21:00:00.278925  124694 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:00:00.286080  124694 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:00:00.286102  124694 kubeadm.go:627] restartCluster start
	I0108 21:00:00.286141  124694 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:00:00.292256  124694 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:00:00.292769  124694 kubeconfig.go:92] found "test-preload-205820" server: "https://192.168.67.2:8443"
	I0108 21:00:00.293379  124694 kapi.go:59] client config for test-preload-205820: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:00:00.293896  124694 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:00:00.302755  124694 kubeadm.go:594] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-01-08 20:58:39.826861611 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-01-08 20:59:59.816713998 +0000
	@@ -38,7 +38,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.24.4
	+kubernetesVersion: v1.24.6
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I0108 21:00:00.302770  124694 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:00:00.302789  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:00:00.302824  124694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:00:00.329264  124694 cri.go:87] found id: "43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4"
	I0108 21:00:00.329296  124694 cri.go:87] found id: "3852802493079c7473ee812611ecf809b363dd7bd001d0400d405c7b881a6881"
	I0108 21:00:00.329308  124694 cri.go:87] found id: "0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659"
	I0108 21:00:00.329317  124694 cri.go:87] found id: ""
	I0108 21:00:00.329323  124694 cri.go:232] Stopping containers: [43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4 3852802493079c7473ee812611ecf809b363dd7bd001d0400d405c7b881a6881 0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659]
	I0108 21:00:00.329366  124694 ssh_runner.go:195] Run: which crictl
	I0108 21:00:00.332622  124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4 3852802493079c7473ee812611ecf809b363dd7bd001d0400d405c7b881a6881 0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659
	I0108 21:00:00.624345  124694 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:00:00.699226  124694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:00:00.706356  124694 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan  8 20:58 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan  8 20:58 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2015 Jan  8 20:58 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  8 20:58 /etc/kubernetes/scheduler.conf
	
	I0108 21:00:00.706408  124694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 21:00:00.713037  124694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 21:00:00.719542  124694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 21:00:00.725937  124694 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:00:00.725991  124694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 21:00:00.731944  124694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 21:00:00.738208  124694 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:00:00.738259  124694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 21:00:00.744328  124694 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:00:00.750786  124694 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:00:00.750804  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:00:00.994143  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:00:01.861835  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:00:02.144772  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:00:02.193739  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:00:02.312980  124694 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:00:02.313046  124694 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:00:02.324151  124694 api_server.go:71] duration metric: took 11.177196ms to wait for apiserver process to appear ...
	I0108 21:00:02.324188  124694 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:00:02.324232  124694 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0108 21:00:02.329308  124694 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0108 21:00:02.336848  124694 api_server.go:140] control plane version: v1.24.4
	W0108 21:00:02.336885  124694 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I0108 21:00:02.838027  124694 api_server.go:140] control plane version: v1.24.4
	W0108 21:00:02.838054  124694 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I0108 21:00:03.338861  124694 api_server.go:140] control plane version: v1.24.4
	W0108 21:00:03.338897  124694 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I0108 21:00:03.837783  124694 api_server.go:140] control plane version: v1.24.4
	W0108 21:00:03.837811  124694 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I0108 21:00:04.338312  124694 api_server.go:140] control plane version: v1.24.4
	W0108 21:00:04.338339  124694 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	W0108 21:00:04.837852  124694 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0108 21:00:05.337803  124694 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0108 21:00:05.837782  124694 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W0108 21:00:06.338026  124694 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	I0108 21:00:09.935143  124694 api_server.go:140] control plane version: v1.24.6
	I0108 21:00:09.935175  124694 api_server.go:130] duration metric: took 7.610979606s to wait for apiserver health ...
	I0108 21:00:09.935185  124694 cni.go:95] Creating CNI manager for ""
	I0108 21:00:09.935193  124694 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:00:09.937716  124694 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:00:09.939281  124694 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:00:10.021100  124694 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.6/kubectl ...
	I0108 21:00:10.021132  124694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:00:10.133101  124694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:00:11.267907  124694 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.134775053s)
	I0108 21:00:11.267939  124694 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:00:11.274594  124694 system_pods.go:59] 6 kube-system pods found
	I0108 21:00:11.274625  124694 system_pods.go:61] "coredns-6d4b75cb6d-48vmf" [d43c5f88-44b8-4ab6-bc5b-f2883eda56e2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 21:00:11.274637  124694 system_pods.go:61] "etcd-test-preload-205820" [f39e5236-110c-4587-8d2c-7da2d7802adc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:00:11.274644  124694 system_pods.go:61] "kindnet-mtvg5" [1257f157-44a7-41fe-9d98-48b85ce53a40] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:00:11.274653  124694 system_pods.go:61] "kube-proxy-wmrz2" [35e9935b-759b-4c18-9d0b-2c0daaab9a1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:00:11.274659  124694 system_pods.go:61] "kube-scheduler-test-preload-205820" [e0e1f824-50ae-4a61-b2c6-d7d2bb6f2edc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:00:11.274664  124694 system_pods.go:61] "storage-provisioner" [bdbd16cd-b53b-4309-ad17-7915a6d7b693] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 21:00:11.274669  124694 system_pods.go:74] duration metric: took 6.724913ms to wait for pod list to return data ...
	I0108 21:00:11.274676  124694 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:00:11.276970  124694 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:00:11.276995  124694 node_conditions.go:123] node cpu capacity is 8
	I0108 21:00:11.277010  124694 node_conditions.go:105] duration metric: took 2.328282ms to run NodePressure ...
	I0108 21:00:11.277035  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:00:11.436079  124694 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 21:00:11.439304  124694 kubeadm.go:778] kubelet initialised
	I0108 21:00:11.439324  124694 kubeadm.go:779] duration metric: took 3.225451ms waiting for restarted kubelet to initialise ...
	I0108 21:00:11.439330  124694 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:00:11.443291  124694 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace to be "Ready" ...
	I0108 21:00:13.452847  124694 pod_ready.go:102] pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:15.453183  124694 pod_ready.go:102] pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:17.953269  124694 pod_ready.go:92] pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace has status "Ready":"True"
	I0108 21:00:17.953294  124694 pod_ready.go:81] duration metric: took 6.509981854s waiting for pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace to be "Ready" ...
	I0108 21:00:17.953304  124694 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-205820" in "kube-system" namespace to be "Ready" ...
	I0108 21:00:19.962548  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:21.963216  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:23.963314  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:26.462627  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:28.462965  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:30.962959  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:32.963068  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:35.463009  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:37.962454  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:40.462881  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:42.963385  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:45.462486  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:47.962468  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:49.962746  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:51.963178  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:54.463217  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:56.963323  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:00:59.463092  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:01.963156  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:04.463567  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:06.464930  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:08.962935  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:11.463300  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:13.962969  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:16.463128  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:18.963199  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:20.963826  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:23.462743  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:25.463158  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:27.962188  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:29.963079  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:32.464217  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:34.962854  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:37.462215  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:39.462584  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:41.462699  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:43.462915  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:45.963307  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:48.463544  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:50.963045  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:52.963170  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:55.462700  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:57.463256  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:01:59.962706  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:01.962779  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:03.963173  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:06.463371  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:08.463437  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:10.465071  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:12.963206  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:15.462589  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:17.462845  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:19.962938  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:21.963353  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:24.463222  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:26.463680  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:28.962594  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:30.962697  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:32.963185  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:35.462477  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:37.463216  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:39.962881  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:42.462539  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:44.462864  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:46.462968  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:48.962577  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:50.962760  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:53.464211  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:55.963075  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:02:58.463348  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:00.962702  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:02.962942  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:04.963134  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:07.462937  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:09.962917  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:12.462863  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:14.962823  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:17.462424  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:19.462845  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:21.962750  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:24.462946  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:26.463390  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:28.962923  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:30.963325  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:33.462969  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:35.963094  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:38.462979  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:40.963186  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:43.462328  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:45.462741  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:47.962483  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:49.963279  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:51.963334  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:54.462958  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:56.963433  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:58.963562  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:00.963753  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:03.463621  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:05.962769  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:07.962891  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:09.963338  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:12.462686  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:14.463369  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:16.963058  124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:17.957364  124694 pod_ready.go:81] duration metric: took 4m0.004045666s waiting for pod "etcd-test-preload-205820" in "kube-system" namespace to be "Ready" ...
	E0108 21:04:17.957391  124694 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "etcd-test-preload-205820" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:04:17.957419  124694 pod_ready.go:38] duration metric: took 4m6.518080998s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:04:17.957445  124694 kubeadm.go:631] restartCluster took 4m17.671337074s
	W0108 21:04:17.957589  124694 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:04:17.957621  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:04:19.626459  124694 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.668819722s)
	I0108 21:04:19.626516  124694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:04:19.635943  124694 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:04:19.642808  124694 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:04:19.642862  124694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:04:19.649319  124694 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:04:19.649357  124694 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:04:19.686509  124694 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I0108 21:04:19.686580  124694 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:04:19.714334  124694 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:04:19.714410  124694 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:04:19.714442  124694 kubeadm.go:317] OS: Linux
	I0108 21:04:19.714480  124694 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:04:19.714520  124694 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:04:19.714613  124694 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:04:19.714688  124694 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:04:19.714729  124694 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:04:19.714777  124694 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:04:19.714821  124694 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:04:19.714864  124694 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:04:19.714905  124694 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:04:19.795815  124694 kubeadm.go:317] W0108 21:04:19.681686    6711 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:04:19.796049  124694 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:04:19.796184  124694 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:04:19.796272  124694 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I0108 21:04:19.796332  124694 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I0108 21:04:19.796381  124694 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I0108 21:04:19.796489  124694 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I0108 21:04:19.796595  124694 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0108 21:04:19.796778  124694 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0108 21:04:19.681686    6711 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	I0108 21:04:19.796820  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:04:20.125925  124694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:04:20.135276  124694 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:04:20.135332  124694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:04:20.142002  124694 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:04:20.142045  124694 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:04:20.178099  124694 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I0108 21:04:20.178220  124694 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:04:20.203461  124694 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:04:20.203557  124694 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:04:20.203613  124694 kubeadm.go:317] OS: Linux
	I0108 21:04:20.203661  124694 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:04:20.203724  124694 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:04:20.203781  124694 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:04:20.203869  124694 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:04:20.203928  124694 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:04:20.203973  124694 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:04:20.204056  124694 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:04:20.204123  124694 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:04:20.204198  124694 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:04:20.268181  124694 kubeadm.go:317] W0108 21:04:20.173147    6979 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:04:20.268365  124694 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:04:20.268449  124694 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:04:20.268528  124694 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I0108 21:04:20.268566  124694 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I0108 21:04:20.268640  124694 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I0108 21:04:20.268767  124694 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I0108 21:04:20.268860  124694 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0108 21:04:20.268932  124694 kubeadm.go:398] StartCluster complete in 4m20.046785929s
	I0108 21:04:20.268974  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:04:20.269027  124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:04:20.291757  124694 cri.go:87] found id: ""
	I0108 21:04:20.291784  124694 logs.go:274] 0 containers: []
	W0108 21:04:20.291794  124694 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:04:20.291800  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:04:20.291843  124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:04:20.314092  124694 cri.go:87] found id: ""
	I0108 21:04:20.314115  124694 logs.go:274] 0 containers: []
	W0108 21:04:20.314121  124694 logs.go:276] No container was found matching "etcd"
	I0108 21:04:20.314127  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:04:20.314165  124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:04:20.336438  124694 cri.go:87] found id: ""
	I0108 21:04:20.336466  124694 logs.go:274] 0 containers: []
	W0108 21:04:20.336476  124694 logs.go:276] No container was found matching "coredns"
	I0108 21:04:20.336485  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:04:20.336531  124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:04:20.360386  124694 cri.go:87] found id: ""
	I0108 21:04:20.360419  124694 logs.go:274] 0 containers: []
	W0108 21:04:20.360428  124694 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:04:20.360436  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:04:20.360477  124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:04:20.384216  124694 cri.go:87] found id: ""
	I0108 21:04:20.384244  124694 logs.go:274] 0 containers: []
	W0108 21:04:20.384251  124694 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:04:20.384259  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:04:20.384307  124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:04:20.407359  124694 cri.go:87] found id: ""
	I0108 21:04:20.407385  124694 logs.go:274] 0 containers: []
	W0108 21:04:20.407391  124694 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:04:20.407397  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:04:20.407446  124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:04:20.429513  124694 cri.go:87] found id: ""
	I0108 21:04:20.429538  124694 logs.go:274] 0 containers: []
	W0108 21:04:20.429547  124694 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:04:20.429554  124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:04:20.429592  124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:04:20.452750  124694 cri.go:87] found id: ""
	I0108 21:04:20.452771  124694 logs.go:274] 0 containers: []
	W0108 21:04:20.452777  124694 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:04:20.452786  124694 logs.go:123] Gathering logs for kubelet ...
	I0108 21:04:20.452797  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:04:20.510605  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937893    4359 projected.go:192] Error preparing data for projected volume kube-api-access-svv2t for pod kube-system/kube-proxy-wmrz2: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.511028  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937978    4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t podName:35e9935b-759b-4c18-9d0b-2c0daaab9a1e nodeName:}" failed. No retries permitted until 2023-01-08 21:00:11.937956077 +0000 UTC m=+9.792765068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-svv2t" (UniqueName: "kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t") pod "kube-proxy-wmrz2" (UID: "35e9935b-759b-4c18-9d0b-2c0daaab9a1e") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.511172  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: W0108 21:00:09.938038    4359 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.511334  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938056    4359 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.511496  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: W0108 21:00:09.938110    4359 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.511664  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938117    4359 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.511857  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938151    4359 projected.go:192] Error preparing data for projected volume kube-api-access-wvwgn for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.512266  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938177    4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdbd16cd-b53b-4309-ad17-7915a6d7b693-kube-api-access-wvwgn podName:bdbd16cd-b53b-4309-ad17-7915a6d7b693 nodeName:}" failed. No retries permitted until 2023-01-08 21:00:10.938168618 +0000 UTC m=+8.792977602 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wvwgn" (UniqueName: "kubernetes.io/projected/bdbd16cd-b53b-4309-ad17-7915a6d7b693-kube-api-access-wvwgn") pod "storage-provisioner" (UID: "bdbd16cd-b53b-4309-ad17-7915a6d7b693") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.512442  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938217    4359 projected.go:192] Error preparing data for projected volume kube-api-access-s5nz9 for pod kube-system/kindnet-mtvg5: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.512847  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938249    4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1257f157-44a7-41fe-9d98-48b85ce53a40-kube-api-access-s5nz9 podName:1257f157-44a7-41fe-9d98-48b85ce53a40 nodeName:}" failed. No retries permitted until 2023-01-08 21:00:10.938238341 +0000 UTC m=+8.793047329 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s5nz9" (UniqueName: "kubernetes.io/projected/1257f157-44a7-41fe-9d98-48b85ce53a40-kube-api-access-s5nz9") pod "kindnet-mtvg5" (UID: "1257f157-44a7-41fe-9d98-48b85ce53a40") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.513031  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938309    4359 projected.go:192] Error preparing data for projected volume kube-api-access-9t8jr for pod kube-system/coredns-6d4b75cb6d-48vmf: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.513475  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938332    4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d43c5f88-44b8-4ab6-bc5b-f2883eda56e2-kube-api-access-9t8jr podName:d43c5f88-44b8-4ab6-bc5b-f2883eda56e2 nodeName:}" failed. No retries permitted until 2023-01-08 21:00:10.938325487 +0000 UTC m=+8.793134472 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9t8jr" (UniqueName: "kubernetes.io/projected/d43c5f88-44b8-4ab6-bc5b-f2883eda56e2-kube-api-access-9t8jr") pod "coredns-6d4b75cb6d-48vmf" (UID: "d43c5f88-44b8-4ab6-bc5b-f2883eda56e2") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.513628  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: W0108 21:00:09.938363    4359 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	W0108 21:04:20.513802  124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938372    4359 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	I0108 21:04:20.534040  124694 logs.go:123] Gathering logs for dmesg ...
	I0108 21:04:20.534063  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:04:20.547468  124694 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:04:20.547515  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:04:20.836897  124694 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:04:20.836920  124694 logs.go:123] Gathering logs for containerd ...
	I0108 21:04:20.836933  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:04:20.891961  124694 logs.go:123] Gathering logs for container status ...
	I0108 21:04:20.891999  124694 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0108 21:04:20.917568  124694 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0108 21:04:20.173147    6979 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	W0108 21:04:20.917600  124694 out.go:239] * 
	W0108 21:04:20.917764  124694 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0108 21:04:20.173147    6979 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 21:04:20.917788  124694 out.go:239] * 
	W0108 21:04:20.918668  124694 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:04:20.921286  124694 out.go:177] X Problems detected in kubelet:
	I0108 21:04:20.922717  124694 out.go:177]   Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937893    4359 projected.go:192] Error preparing data for projected volume kube-api-access-svv2t for pod kube-system/kube-proxy-wmrz2: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	I0108 21:04:20.925364  124694 out.go:177]   Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937978    4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t podName:35e9935b-759b-4c18-9d0b-2c0daaab9a1e nodeName:}" failed. No retries permitted until 2023-01-08 21:00:11.937956077 +0000 UTC m=+9.792765068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-svv2t" (UniqueName: "kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t") pod "kube-proxy-wmrz2" (UID: "35e9935b-759b-4c18-9d0b-2c0daaab9a1e") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	I0108 21:04:20.926971  124694 out.go:177]   Jan 08 21:00:10 test-preload-205820 kubelet[4359]: W0108 21:00:09.938038    4359 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
	I0108 21:04:20.929431  124694 out.go:177] 
	W0108 21:04:20.930937  124694 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W0108 21:04:20.173147    6979 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 21:04:20.931018  124694 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W0108 21:04:20.931068  124694 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	I0108 21:04:20.932735  124694 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sun 2023-01-08 20:58:22 UTC, end at Sun 2023-01-08 21:04:21 UTC. --
	Jan 08 21:04:19 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:19.927436574Z" level=error msg="StopPodSandbox for \"\\\"Using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"\\\"Using\": not found"
	Jan 08 21:04:19 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:19.943218825Z" level=info msg="StopPodSandbox for \"this\""
	Jan 08 21:04:19 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:19.943264656Z" level=error msg="StopPodSandbox for \"this\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"this\": not found"
	Jan 08 21:04:19 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:19.959410165Z" level=info msg="StopPodSandbox for \"endpoint\""
	Jan 08 21:04:19 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:19.959456780Z" level=error msg="StopPodSandbox for \"endpoint\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint\": not found"
	Jan 08 21:04:19 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:19.976523539Z" level=info msg="StopPodSandbox for \"is\""
	Jan 08 21:04:19 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:19.976573063Z" level=error msg="StopPodSandbox for \"is\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"is\": not found"
	Jan 08 21:04:19 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:19.992515921Z" level=info msg="StopPodSandbox for \"deprecated,\""
	Jan 08 21:04:19 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:19.992564379Z" level=error msg="StopPodSandbox for \"deprecated,\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"deprecated,\": not found"
	Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.008857888Z" level=info msg="StopPodSandbox for \"please\""
	Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.008907023Z" level=error msg="StopPodSandbox for \"please\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"please\": not found"
	Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.025453298Z" level=info msg="StopPodSandbox for \"consider\""
	Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.025506712Z" level=error msg="StopPodSandbox for \"consider\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"consider\": not found"
	Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.040762943Z" level=info msg="StopPodSandbox for \"using\""
	Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.040804963Z" level=error msg="StopPodSandbox for \"using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"using\": not found"
	Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.057148884Z" level=info msg="StopPodSandbox for \"full\""
	Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.057195124Z" level=error msg="StopPodSandbox for \"full\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"full\": not found"
	Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.073055648Z" level=info msg="StopPodSandbox for \"URL\""
	Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.073099827Z" level=error msg="StopPodSandbox for \"URL\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL\": not found"
	Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.089148856Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.089197996Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.105182966Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.105229329Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.121419409Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.121466475Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.008448] FS-Cache: Duplicate cookie detected
	[  +0.005292] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006738] FS-Cache: O-cookie d=00000000b351f190{9p.inode} n=00000000b94a5e01
	[  +0.008741] FS-Cache: O-key=[8] '8ea00f0200000000'
	[  +0.006286] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007953] FS-Cache: N-cookie d=00000000b351f190{9p.inode} n=000000008bdebc64
	[  +0.008734] FS-Cache: N-key=[8] '8ea00f0200000000'
	[  +3.644617] FS-Cache: Duplicate cookie detected
	[  +0.004692] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006733] FS-Cache: O-cookie d=00000000b351f190{9p.inode} n=000000002647edbf
	[  +0.007353] FS-Cache: O-key=[8] '8da00f0200000000'
	[  +0.004933] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006615] FS-Cache: N-cookie d=00000000b351f190{9p.inode} n=000000002ffef31b
	[  +0.008707] FS-Cache: N-key=[8] '8da00f0200000000'
	[  +0.360206] FS-Cache: Duplicate cookie detected
	[  +0.004682] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006745] FS-Cache: O-cookie d=00000000b351f190{9p.inode} n=00000000ca95e3ed
	[  +0.007364] FS-Cache: O-key=[8] '98a00f0200000000'
	[  +0.005138] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.007934] FS-Cache: N-cookie d=00000000b351f190{9p.inode} n=000000009a7f623e
	[  +0.008739] FS-Cache: N-key=[8] '98a00f0200000000'
	[Jan 8 20:44] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan 8 21:00] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000386] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.011260] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> kernel <==
	*  21:04:21 up 46 min,  0 users,  load average: 0.32, 0.59, 0.69
	Linux test-preload-205820 5.15.0-1025-gcp #32~20.04.2-Ubuntu SMP Tue Nov 29 08:31:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 20:58:22 UTC, end at Sun 2023-01-08 21:04:22 UTC. --
	Jan 08 21:02:52 test-preload-205820 kubelet[4359]: I0108 21:02:52.350590    4359 scope.go:110] "RemoveContainer" containerID="331e6cfcf9c146cb0bb87ed8961668f3b1301b48f3d6c4fe14f75657e855c72c"
	Jan 08 21:02:52 test-preload-205820 kubelet[4359]: I0108 21:02:52.799370    4359 scope.go:110] "RemoveContainer" containerID="331e6cfcf9c146cb0bb87ed8961668f3b1301b48f3d6c4fe14f75657e855c72c"
	Jan 08 21:02:52 test-preload-205820 kubelet[4359]: I0108 21:02:52.799742    4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
	Jan 08 21:02:52 test-preload-205820 kubelet[4359]: E0108 21:02:52.800239    4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
	Jan 08 21:02:58 test-preload-205820 kubelet[4359]: I0108 21:02:58.243657    4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
	Jan 08 21:02:58 test-preload-205820 kubelet[4359]: E0108 21:02:58.244202    4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
	Jan 08 21:02:58 test-preload-205820 kubelet[4359]: I0108 21:02:58.812972    4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
	Jan 08 21:02:58 test-preload-205820 kubelet[4359]: E0108 21:02:58.813284    4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
	Jan 08 21:02:59 test-preload-205820 kubelet[4359]: I0108 21:02:59.814359    4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
	Jan 08 21:02:59 test-preload-205820 kubelet[4359]: E0108 21:02:59.814663    4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
	Jan 08 21:03:11 test-preload-205820 kubelet[4359]: I0108 21:03:11.350653    4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
	Jan 08 21:03:11 test-preload-205820 kubelet[4359]: E0108 21:03:11.350978    4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
	Jan 08 21:03:24 test-preload-205820 kubelet[4359]: I0108 21:03:24.350602    4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
	Jan 08 21:03:24 test-preload-205820 kubelet[4359]: E0108 21:03:24.351149    4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
	Jan 08 21:03:35 test-preload-205820 kubelet[4359]: I0108 21:03:35.350448    4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
	Jan 08 21:03:35 test-preload-205820 kubelet[4359]: E0108 21:03:35.351057    4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
	Jan 08 21:03:47 test-preload-205820 kubelet[4359]: I0108 21:03:47.350062    4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
	Jan 08 21:03:47 test-preload-205820 kubelet[4359]: E0108 21:03:47.350424    4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
	Jan 08 21:04:01 test-preload-205820 kubelet[4359]: I0108 21:04:01.349900    4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
	Jan 08 21:04:01 test-preload-205820 kubelet[4359]: E0108 21:04:01.350244    4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
	Jan 08 21:04:12 test-preload-205820 kubelet[4359]: I0108 21:04:12.350774    4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
	Jan 08 21:04:12 test-preload-205820 kubelet[4359]: E0108 21:04:12.351100    4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
	Jan 08 21:04:18 test-preload-205820 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jan 08 21:04:18 test-preload-205820 systemd[1]: kubelet.service: Succeeded.
	Jan 08 21:04:18 test-preload-205820 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:04:21.980071  129545 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-205820 -n test-preload-205820
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-205820 -n test-preload-205820: exit status 2 (341.899606ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "test-preload-205820" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-205820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-205820
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-205820: (2.038663381s)
--- FAIL: TestPreload (364.34s)
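
Note on this failure: the second `minikube start` dies in kubeadm preflight on [ERROR Port-2379] and [ERROR Port-2380], i.e. the etcd client/peer ports are still bound inside the "test-preload-205820" container (the kubelet log above shows the old etcd container crash-looping, and the container status dump is empty because the apiserver never came back). The commands below are a minimal, illustrative triage sketch only, not part of the test run; they assume the profile container has not yet been deleted (the cleanup step above removes it), and the exact process holding the ports is not captured in this log.
	out/minikube-linux-amd64 ssh -p test-preload-205820 -- sudo crictl ps -a --name etcd    # is a stale etcd container still present / restarting?
	out/minikube-linux-amd64 ssh -p test-preload-205820 -- sudo ss -ltnp                    # look for the PIDs still listening on :2379 and :2380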

                                                
                                    
TestKubernetesUpgrade (566.37s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-210902 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-210902 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.76227418s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-210902
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-210902: (1.305796516s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-210902 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-210902 status --format={{.Host}}: exit status 7 (111.541042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-210902 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-210902 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (8m39.009096931s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-210902] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node kubernetes-upgrade-210902 in cluster kubernetes-upgrade-210902
	* Pulling base image ...
	* Restarting existing docker container for "kubernetes-upgrade-210902" ...
	* Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Jan 08 21:17:34 kubernetes-upgrade-210902 kubelet[12477]: E0108 21:17:34.195356   12477 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:17:34 kubernetes-upgrade-210902 kubelet[12488]: E0108 21:17:34.945977   12488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:17:35 kubernetes-upgrade-210902 kubelet[12499]: E0108 21:17:35.695839   12499 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	
	

                                                
                                                
-- /stdout --
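Note on this failure: exit status 109 follows from the kubelet errors under "X Problems detected" above, where the v1.25.3 kubelet rejects --cni-conf-dir as an unknown flag. The profile still carries the extra option {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk} from the original v1.16.0 start (see the config dump in the stderr below and the "kubelet.cni-conf-dir" line in the stdout above). A minimal sketch of clearing the stale option before retrying the upgrade; the jq expression assumes the on-disk config.json uses the same field names as the struct dump in this log, and the path is the one printed in this log:
	CFG=/home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kubernetes-upgrade-210902/config.json
	jq '.KubernetesConfig.ExtraOptions = null' "$CFG" > "$CFG.tmp" && mv "$CFG.tmp" "$CFG"   # drop the carried-over kubelet extra option
	# or simply recreate the profile instead of editing it:
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-210902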
** stderr ** 
	I0108 21:09:45.378073  181838 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:09:45.378253  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:09:45.378260  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:09:45.378267  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:09:45.378736  181838 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:09:45.380254  181838 out.go:303] Setting JSON to false
	I0108 21:09:45.382465  181838 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3135,"bootTime":1673209051,"procs":1257,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:09:45.382545  181838 start.go:135] virtualization: kvm guest
	I0108 21:09:45.385698  181838 out.go:177] * [kubernetes-upgrade-210902] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:09:45.387667  181838 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:09:45.387702  181838 notify.go:220] Checking for updates...
	I0108 21:09:45.391654  181838 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:09:45.393538  181838 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:09:45.395504  181838 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:09:45.397452  181838 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:09:45.399752  181838 config.go:180] Loaded profile config "kubernetes-upgrade-210902": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0108 21:09:45.400316  181838 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:09:45.434745  181838 docker.go:137] docker version: linux-20.10.22
	I0108 21:09:45.434871  181838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:09:45.552000  181838 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2023-01-08 21:09:45.45999102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:09:45.552088  181838 docker.go:254] overlay module found
	I0108 21:09:45.554740  181838 out.go:177] * Using the docker driver based on existing profile
	I0108 21:09:45.556191  181838 start.go:294] selected driver: docker
	I0108 21:09:45.556206  181838 start.go:838] validating driver "docker" against &{Name:kubernetes-upgrade-210902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-210902 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:09:45.556308  181838 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:09:45.557207  181838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:09:45.672746  181838 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2023-01-08 21:09:45.582680616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:09:45.673087  181838 cni.go:95] Creating CNI manager for ""
	I0108 21:09:45.673111  181838 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:09:45.673123  181838 start_flags.go:317] config:
	{Name:kubernetes-upgrade-210902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-210902 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:09:45.675658  181838 out.go:177] * Starting control plane node kubernetes-upgrade-210902 in cluster kubernetes-upgrade-210902
	I0108 21:09:45.677488  181838 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:09:45.679215  181838 out.go:177] * Pulling base image ...
	I0108 21:09:45.680992  181838 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:09:45.681029  181838 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:09:45.681039  181838 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0108 21:09:45.681062  181838 cache.go:57] Caching tarball of preloaded images
	I0108 21:09:45.681280  181838 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:09:45.681297  181838 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0108 21:09:45.681417  181838 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kubernetes-upgrade-210902/config.json ...
	I0108 21:09:45.714166  181838 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:09:45.714197  181838 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:09:45.714214  181838 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:09:45.714255  181838 start.go:364] acquiring machines lock for kubernetes-upgrade-210902: {Name:mk95e8c0875195a6adbe782ee8cb8ec34802f5f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:09:45.714376  181838 start.go:368] acquired machines lock for "kubernetes-upgrade-210902" in 83.259µs
	I0108 21:09:45.714399  181838 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:09:45.714408  181838 fix.go:55] fixHost starting: 
	I0108 21:09:45.714686  181838 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-210902 --format={{.State.Status}}
	I0108 21:09:45.744483  181838 fix.go:103] recreateIfNeeded on kubernetes-upgrade-210902: state=Stopped err=<nil>
	W0108 21:09:45.744553  181838 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:09:45.747643  181838 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-210902" ...
	I0108 21:09:45.749482  181838 cli_runner.go:164] Run: docker start kubernetes-upgrade-210902
	I0108 21:09:46.173907  181838 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-210902 --format={{.State.Status}}
	I0108 21:09:46.201895  181838 kic.go:415] container "kubernetes-upgrade-210902" state is running.
	I0108 21:09:46.202352  181838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-210902
	I0108 21:09:46.234443  181838 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kubernetes-upgrade-210902/config.json ...
	I0108 21:09:46.234717  181838 machine.go:88] provisioning docker machine ...
	I0108 21:09:46.234756  181838 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-210902"
	I0108 21:09:46.234819  181838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-210902
	I0108 21:09:46.268093  181838 main.go:134] libmachine: Using SSH client type: native
	I0108 21:09:46.268342  181838 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32972 <nil> <nil>}
	I0108 21:09:46.268373  181838 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-210902 && echo "kubernetes-upgrade-210902" | sudo tee /etc/hostname
	I0108 21:09:46.269098  181838 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33384->127.0.0.1:32972: read: connection reset by peer
	I0108 21:09:49.405186  181838 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-210902
	
	I0108 21:09:49.405271  181838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-210902
	I0108 21:09:49.430482  181838 main.go:134] libmachine: Using SSH client type: native
	I0108 21:09:49.430640  181838 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32972 <nil> <nil>}
	I0108 21:09:49.430660  181838 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-210902' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-210902/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-210902' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:09:49.547320  181838 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:09:49.547357  181838 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:09:49.547395  181838 ubuntu.go:177] setting up certificates
	I0108 21:09:49.547411  181838 provision.go:83] configureAuth start
	I0108 21:09:49.547465  181838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-210902
	I0108 21:09:49.571694  181838 provision.go:138] copyHostCerts
	I0108 21:09:49.571757  181838 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:09:49.571770  181838 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:09:49.571842  181838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:09:49.571936  181838 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:09:49.571944  181838 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:09:49.571970  181838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:09:49.572033  181838 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:09:49.572040  181838 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:09:49.572062  181838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:09:49.572107  181838 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-210902 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-210902]
	I0108 21:09:49.724372  181838 provision.go:172] copyRemoteCerts
	I0108 21:09:49.724458  181838 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:09:49.724503  181838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-210902
	I0108 21:09:49.749972  181838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/kubernetes-upgrade-210902/id_rsa Username:docker}
	I0108 21:09:49.838599  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0108 21:09:49.856071  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:09:49.922485  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:09:49.940098  181838 provision.go:86] duration metric: configureAuth took 392.673924ms
	I0108 21:09:49.940122  181838 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:09:49.940280  181838 config.go:180] Loaded profile config "kubernetes-upgrade-210902": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:09:49.940291  181838 machine.go:91] provisioned docker machine in 3.705555535s
	I0108 21:09:49.940299  181838 start.go:300] post-start starting for "kubernetes-upgrade-210902" (driver="docker")
	I0108 21:09:49.940305  181838 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:09:49.940345  181838 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:09:49.940381  181838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-210902
	I0108 21:09:49.964815  181838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/kubernetes-upgrade-210902/id_rsa Username:docker}
	I0108 21:09:50.051124  181838 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:09:50.054085  181838 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:09:50.054107  181838 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:09:50.054115  181838 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:09:50.054124  181838 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:09:50.054138  181838 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:09:50.054196  181838 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:09:50.054281  181838 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:09:50.054375  181838 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:09:50.061094  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:09:50.078098  181838 start.go:303] post-start completed in 137.785615ms
	I0108 21:09:50.078176  181838 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:09:50.078225  181838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-210902
	I0108 21:09:50.103064  181838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/kubernetes-upgrade-210902/id_rsa Username:docker}
	I0108 21:09:50.183944  181838 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:09:50.187764  181838 fix.go:57] fixHost completed within 4.473350541s
	I0108 21:09:50.187787  181838 start.go:83] releasing machines lock for "kubernetes-upgrade-210902", held for 4.473396827s
	I0108 21:09:50.187869  181838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-210902
	I0108 21:09:50.223066  181838 ssh_runner.go:195] Run: cat /version.json
	I0108 21:09:50.223119  181838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-210902
	I0108 21:09:50.223332  181838 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:09:50.223412  181838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-210902
	I0108 21:09:50.249867  181838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/kubernetes-upgrade-210902/id_rsa Username:docker}
	I0108 21:09:50.250270  181838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32972 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/kubernetes-upgrade-210902/id_rsa Username:docker}
	I0108 21:09:50.359227  181838 ssh_runner.go:195] Run: systemctl --version
	I0108 21:09:50.363288  181838 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:09:50.376079  181838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:09:50.387433  181838 docker.go:189] disabling docker service ...
	I0108 21:09:50.488838  181838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:09:50.499220  181838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:09:50.508729  181838 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:09:50.588736  181838 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:09:50.681272  181838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:09:50.690370  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:09:50.734443  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:09:50.752230  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:09:50.782325  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:09:50.847045  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
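	The four sed edits above rewrite well-known keys in /etc/containerd/config.toml (pause image, OOM score restriction, cgroup driver, CNI conf dir). A minimal way to confirm they took effect on the node, assuming the stock kic-base config layout, is:
	  sudo grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	  # expected, approximately:
	  #   sandbox_image = "registry.k8s.io/pause:3.8"
	  #   restrict_oom_score_adj = false
	  #   SystemdCgroup = false
	  #   conf_dir = "/etc/cni/net.mk"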
	I0108 21:09:50.908726  181838 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:09:50.915636  181838 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:09:50.922788  181838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:09:50.994240  181838 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:09:51.075792  181838 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:09:51.075850  181838 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:09:51.079643  181838 start.go:472] Will wait 60s for crictl version
	I0108 21:09:51.079697  181838 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:09:51.103176  181838 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:09:51Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 21:10:02.151460  181838 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:10:02.178344  181838 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:10:02.178413  181838 ssh_runner.go:195] Run: containerd --version
	I0108 21:10:02.205269  181838 ssh_runner.go:195] Run: containerd --version
	I0108 21:10:02.242408  181838 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:10:02.244083  181838 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-210902 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:10:02.270651  181838 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0108 21:10:02.273971  181838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:10:02.285839  181838 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0108 21:10:02.287567  181838 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:10:02.287634  181838 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:10:02.313605  181838 containerd.go:549] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.25.3". assuming images are not preloaded.
	I0108 21:10:02.313670  181838 ssh_runner.go:195] Run: which lz4
	I0108 21:10:02.317692  181838 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 21:10:02.321174  181838 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0108 21:10:02.321200  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (669534256 bytes)
	I0108 21:10:03.849575  181838 containerd.go:496] Took 1.531913 seconds to copy over tarball
	I0108 21:10:03.849668  181838 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 21:10:06.264040  181838 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.414337546s)
	I0108 21:10:06.264067  181838 containerd.go:503] Took 2.414446 seconds to extract the tarball
	I0108 21:10:06.264078  181838 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 21:10:06.357029  181838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:10:06.434972  181838 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:10:06.499911  181838 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:10:06.526878  181838 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.25.3 registry.k8s.io/kube-controller-manager:v1.25.3 registry.k8s.io/kube-scheduler:v1.25.3 registry.k8s.io/kube-proxy:v1.25.3 registry.k8s.io/pause:3.8 registry.k8s.io/etcd:3.5.4-0 registry.k8s.io/coredns/coredns:v1.9.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 21:10:06.526949  181838 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:10:06.526991  181838 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.25.3
	I0108 21:10:06.527006  181838 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.25.3
	I0108 21:10:06.527044  181838 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.4-0
	I0108 21:10:06.527097  181838 image.go:134] retrieving image: registry.k8s.io/pause:3.8
	I0108 21:10:06.527165  181838 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.9.3
	I0108 21:10:06.526971  181838 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.25.3
	I0108 21:10:06.527332  181838 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.25.3
	I0108 21:10:06.528279  181838 image.go:177] daemon lookup for registry.k8s.io/pause:3.8: Error: No such image: registry.k8s.io/pause:3.8
	I0108 21:10:06.528282  181838 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.4-0: Error: No such image: registry.k8s.io/etcd:3.5.4-0
	I0108 21:10:06.528298  181838 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:10:06.528311  181838 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.25.3: Error: No such image: registry.k8s.io/kube-controller-manager:v1.25.3
	I0108 21:10:06.528355  181838 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.25.3: Error: No such image: registry.k8s.io/kube-scheduler:v1.25.3
	I0108 21:10:06.528281  181838 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.25.3: Error: No such image: registry.k8s.io/kube-proxy:v1.25.3
	I0108 21:10:06.528453  181838 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.25.3: Error: No such image: registry.k8s.io/kube-apiserver:v1.25.3
	I0108 21:10:06.528295  181838 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.9.3: Error: No such image: registry.k8s.io/coredns/coredns:v1.9.3
	I0108 21:10:06.697975  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.9.3"
	I0108 21:10:06.703192  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.25.3"
	I0108 21:10:06.707905  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.4-0"
	I0108 21:10:06.714468  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.25.3"
	I0108 21:10:06.716539  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.8"
	I0108 21:10:06.725995  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.25.3"
	I0108 21:10:06.751772  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.25.3"
	I0108 21:10:07.304580  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 21:10:07.413489  181838 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.9.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.9.3" does not exist at hash "5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a" in container runtime
	I0108 21:10:07.413596  181838 cri.go:216] Removing image: registry.k8s.io/coredns/coredns:v1.9.3
	I0108 21:10:07.413668  181838 ssh_runner.go:195] Run: which crictl
	I0108 21:10:07.435441  181838 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.25.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.25.3" does not exist at hash "60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a" in container runtime
	I0108 21:10:07.435532  181838 cri.go:216] Removing image: registry.k8s.io/kube-controller-manager:v1.25.3
	I0108 21:10:07.435589  181838 ssh_runner.go:195] Run: which crictl
	I0108 21:10:07.435731  181838 cache_images.go:116] "registry.k8s.io/etcd:3.5.4-0" needs transfer: "registry.k8s.io/etcd:3.5.4-0" does not exist at hash "a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66" in container runtime
	I0108 21:10:07.435822  181838 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.25.3" needs transfer: "registry.k8s.io/kube-proxy:v1.25.3" does not exist at hash "beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041" in container runtime
	I0108 21:10:07.435843  181838 cri.go:216] Removing image: registry.k8s.io/etcd:3.5.4-0
	I0108 21:10:07.435856  181838 cri.go:216] Removing image: registry.k8s.io/kube-proxy:v1.25.3
	I0108 21:10:07.435895  181838 ssh_runner.go:195] Run: which crictl
	I0108 21:10:07.435911  181838 ssh_runner.go:195] Run: which crictl
	I0108 21:10:07.519774  181838 cache_images.go:116] "registry.k8s.io/pause:3.8" needs transfer: "registry.k8s.io/pause:3.8" does not exist at hash "4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517" in container runtime
	I0108 21:10:07.519825  181838 cri.go:216] Removing image: registry.k8s.io/pause:3.8
	I0108 21:10:07.519865  181838 ssh_runner.go:195] Run: which crictl
	I0108 21:10:07.524877  181838 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.25.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.25.3" does not exist at hash "0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0" in container runtime
	I0108 21:10:07.524919  181838 cri.go:216] Removing image: registry.k8s.io/kube-apiserver:v1.25.3
	I0108 21:10:07.524957  181838 ssh_runner.go:195] Run: which crictl
	I0108 21:10:07.549764  181838 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.25.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.25.3" does not exist at hash "6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912" in container runtime
	I0108 21:10:07.549813  181838 cri.go:216] Removing image: registry.k8s.io/kube-scheduler:v1.25.3
	I0108 21:10:07.549858  181838 ssh_runner.go:195] Run: which crictl
	I0108 21:10:07.669755  181838 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0108 21:10:07.669800  181838 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:10:07.669802  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.9.3
	I0108 21:10:07.669837  181838 ssh_runner.go:195] Run: which crictl
	I0108 21:10:07.669856  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.25.3
	I0108 21:10:07.669906  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.25.3
	I0108 21:10:07.669957  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.4-0
	I0108 21:10:07.669993  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.25.3
	I0108 21:10:07.669958  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.8
	I0108 21:10:07.670039  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.25.3
	I0108 21:10:08.170938  181838 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3
	I0108 21:10:08.170941  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:10:08.171045  181838 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.25.3
	I0108 21:10:08.172805  181838 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3
	I0108 21:10:08.172921  181838 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.9.3
	I0108 21:10:08.172988  181838 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3
	I0108 21:10:08.173064  181838 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.25.3
	I0108 21:10:08.174555  181838 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3
	I0108 21:10:08.174645  181838 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.25.3
	I0108 21:10:08.174685  181838 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8
	I0108 21:10:08.174658  181838 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3
	I0108 21:10:08.174748  181838 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.8
	I0108 21:10:08.174782  181838 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.25.3
	I0108 21:10:08.174672  181838 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0
	I0108 21:10:08.174869  181838 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.4-0
	I0108 21:10:08.208683  181838 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0108 21:10:08.208735  181838 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.25.3': No such file or directory
	I0108 21:10:08.208760  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 --> /var/lib/minikube/images/kube-proxy_v1.25.3 (20268032 bytes)
	I0108 21:10:08.208781  181838 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.9.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.9.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.9.3': No such file or directory
	I0108 21:10:08.208809  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 --> /var/lib/minikube/images/coredns_v1.9.3 (14839296 bytes)
	I0108 21:10:08.208844  181838 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.25.3': No such file or directory
	I0108 21:10:08.208785  181838 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0108 21:10:08.208867  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 --> /var/lib/minikube/images/kube-controller-manager_v1.25.3 (31264768 bytes)
	I0108 21:10:08.208884  181838 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.25.3': No such file or directory
	I0108 21:10:08.208901  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 --> /var/lib/minikube/images/kube-apiserver_v1.25.3 (34241024 bytes)
	I0108 21:10:08.208930  181838 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.8: stat -c "%s %y" /var/lib/minikube/images/pause_3.8: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.8': No such file or directory
	I0108 21:10:08.208958  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 --> /var/lib/minikube/images/pause_3.8 (311296 bytes)
	I0108 21:10:08.209006  181838 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.5.4-0': No such file or directory
	I0108 21:10:08.209038  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 --> /var/lib/minikube/images/etcd_3.5.4-0 (102160384 bytes)
	I0108 21:10:08.208847  181838 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.25.3': No such file or directory
	I0108 21:10:08.209204  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 --> /var/lib/minikube/images/kube-scheduler_v1.25.3 (15801856 bytes)
	I0108 21:10:08.219677  181838 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0108 21:10:08.219709  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0108 21:10:08.250780  181838 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.8
	I0108 21:10:08.250850  181838 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.8
	I0108 21:10:08.481462  181838 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 from cache
	I0108 21:10:08.481509  181838 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0108 21:10:08.481567  181838 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0108 21:10:09.108365  181838 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0108 21:10:09.108406  181838 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.9.3
	I0108 21:10:09.108466  181838 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.9.3
	I0108 21:10:09.692103  181838 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 from cache
	I0108 21:10:09.692142  181838 containerd.go:233] Loading image: /var/lib/minikube/images/kube-scheduler_v1.25.3
	I0108 21:10:09.692198  181838 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.25.3
	I0108 21:10:10.472021  181838 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 from cache
	I0108 21:10:10.472053  181838 containerd.go:233] Loading image: /var/lib/minikube/images/kube-proxy_v1.25.3
	I0108 21:10:10.472100  181838 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.25.3
	I0108 21:10:11.034303  181838 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 from cache
	I0108 21:10:11.034346  181838 containerd.go:233] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.25.3
	I0108 21:10:11.034389  181838 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.25.3
	I0108 21:10:12.136037  181838 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.25.3: (1.101612438s)
	I0108 21:10:12.136070  181838 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 from cache
	I0108 21:10:12.136099  181838 containerd.go:233] Loading image: /var/lib/minikube/images/kube-apiserver_v1.25.3
	I0108 21:10:12.136146  181838 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.25.3
	I0108 21:10:13.335916  181838 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.25.3: (1.19974472s)
	I0108 21:10:13.335941  181838 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 from cache
	I0108 21:10:13.335961  181838 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.4-0
	I0108 21:10:13.336003  181838 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.4-0
	I0108 21:10:16.756529  181838 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.4-0: (3.420502362s)
	I0108 21:10:16.756557  181838 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 from cache
	I0108 21:10:16.756578  181838 cache_images.go:123] Successfully loaded all cached images
	I0108 21:10:16.756586  181838 cache_images.go:92] LoadImages completed in 10.22967472s
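	With all cached images transferred and loaded, a quick spot-check that the runtime actually sees them (a sketch reusing the same crictl tooling the log itself invokes, just without the JSON output) is:
	  sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy|etcd|coredns|pause|storage-provisioner'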
	I0108 21:10:16.756680  181838 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:10:16.780932  181838 cni.go:95] Creating CNI manager for ""
	I0108 21:10:16.780965  181838 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:10:16.780979  181838 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:10:16.780997  181838 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-210902 NodeName:kubernetes-upgrade-210902 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:10:16.781143  181838 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kubernetes-upgrade-210902"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:10:16.781250  181838 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-210902 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-210902 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:10:16.781305  181838 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:10:16.788436  181838 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:10:16.788499  181838 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:10:16.795065  181838 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (549 bytes)
	I0108 21:10:16.807372  181838 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:10:16.820006  181838 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
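	At this point the kubelet unit, the 10-kubeadm.conf drop-in carrying the ExecStart line shown above, and the regenerated kubeadm.yaml.new are all in place on the node. A minimal sketch for inspecting the merged kubelet unit from the host, assuming the profile name used in this test, is:
	  out/minikube-linux-amd64 ssh -p kubernetes-upgrade-210902 -- sudo systemctl cat kubelet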
	I0108 21:10:16.832329  181838 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:10:16.835283  181838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:10:16.844293  181838 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kubernetes-upgrade-210902 for IP: 192.168.76.2
	I0108 21:10:16.844395  181838 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:10:16.844448  181838 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:10:16.844530  181838 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kubernetes-upgrade-210902/client.key
	I0108 21:10:16.844598  181838 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kubernetes-upgrade-210902/apiserver.key.31bdca25
	I0108 21:10:16.844659  181838 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kubernetes-upgrade-210902/proxy-client.key
	I0108 21:10:16.844790  181838 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:10:16.844831  181838 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:10:16.844845  181838 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:10:16.844876  181838 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:10:16.844908  181838 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:10:16.844945  181838 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:10:16.844994  181838 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:10:16.845560  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kubernetes-upgrade-210902/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:10:16.862265  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kubernetes-upgrade-210902/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:10:16.878456  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kubernetes-upgrade-210902/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:10:16.894771  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kubernetes-upgrade-210902/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 21:10:16.911266  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:10:16.927896  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:10:16.944229  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:10:16.962323  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:10:16.978771  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:10:16.995487  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:10:17.012041  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:10:17.028485  181838 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:10:17.040458  181838 ssh_runner.go:195] Run: openssl version
	I0108 21:10:17.045243  181838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:10:17.052584  181838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:10:17.055666  181838 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:10:17.055713  181838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:10:17.060398  181838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:10:17.066968  181838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:10:17.074299  181838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:10:17.077269  181838 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:10:17.077314  181838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:10:17.082008  181838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:10:17.088734  181838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:10:17.095988  181838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:10:17.098963  181838 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:10:17.099014  181838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:10:17.103912  181838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:10:17.110955  181838 kubeadm.go:396] StartCluster: {Name:kubernetes-upgrade-210902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-210902 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:10:17.111049  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:10:17.111083  181838 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:10:17.135124  181838 cri.go:87] found id: ""
	I0108 21:10:17.135194  181838 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:10:17.142378  181838 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:10:17.142399  181838 kubeadm.go:627] restartCluster start
	I0108 21:10:17.142433  181838 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:10:17.149150  181838 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:10:17.149641  181838 kubeconfig.go:135] verify returned: extract IP: "kubernetes-upgrade-210902" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:10:17.149912  181838 kubeconfig.go:146] "kubernetes-upgrade-210902" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:10:17.150327  181838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:10:17.151015  181838 kapi.go:59] client config for kubernetes-upgrade-210902: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kubernetes-upgrade-210902/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kubernetes-upgrade-210902/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:10:17.151595  181838 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:10:17.158601  181838 kubeadm.go:594] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-01-08 21:09:13.817253119 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-01-08 21:10:16.825863985 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -17,7 +17,7 @@
	     node-ip: 192.168.76.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-210902
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.25.3
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I0108 21:10:17.158619  181838 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:10:17.158631  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:10:17.158680  181838 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:10:17.186862  181838 cri.go:87] found id: ""
	I0108 21:10:17.186918  181838 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:10:17.197604  181838 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:10:17.206390  181838 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5707 Jan  8 21:09 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5743 Jan  8 21:09 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5823 Jan  8 21:09 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5687 Jan  8 21:09 /etc/kubernetes/scheduler.conf
	
	I0108 21:10:17.206452  181838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 21:10:17.214541  181838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 21:10:17.221404  181838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 21:10:17.228077  181838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 21:10:17.234961  181838 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:10:17.242527  181838 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:10:17.242548  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:10:17.283992  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:10:17.869613  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:10:18.009652  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:10:18.062378  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:10:18.106955  181838 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:10:18.107018  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:18.617410  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:19.117795  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:19.617290  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:20.117669  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:20.617194  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:21.116891  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:21.616755  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:22.117750  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:22.617144  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:23.117487  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:23.617741  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:24.116952  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:24.617681  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:25.116986  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:25.616823  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:26.117630  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:26.617619  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:27.117499  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:27.617259  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:28.117464  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:28.617569  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:29.116982  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:29.617359  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:30.117594  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:30.617240  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:31.117698  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:31.617099  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:32.117725  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:32.617668  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:33.116811  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:33.617090  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:34.116944  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:34.617255  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:35.117364  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:35.617844  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:36.117132  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:36.617624  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:37.117211  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:37.617714  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:38.117377  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:38.617782  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:39.116929  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:39.617616  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:40.117668  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:40.617162  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:41.116864  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:41.617449  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:42.117833  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:42.617604  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:43.117061  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:43.617102  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:44.117578  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:44.616770  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:45.116822  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:45.617649  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:46.117587  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:46.617220  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:47.117073  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:47.617820  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:48.116985  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:48.616942  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:49.117306  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:49.617078  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:50.117340  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:50.616782  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:51.116953  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:51.617571  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:52.117410  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:52.618599  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:53.117472  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:53.616768  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:54.116912  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:54.617683  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:55.116850  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:55.616911  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:56.117046  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:56.616842  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:57.117814  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:57.617749  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:58.117797  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:58.617472  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:59.117676  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:59.617185  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:00.116851  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:00.617234  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:01.117107  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:01.617618  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:02.117444  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:02.616946  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:03.117710  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:03.616986  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:04.116862  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:04.617700  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:05.116915  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:05.617529  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:06.116831  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:06.617335  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:07.117237  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:07.616893  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:08.116841  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:08.617077  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:09.117721  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:09.617716  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:10.117405  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:10.616879  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:11.117483  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:11.617017  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:12.116805  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:12.616925  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:13.117264  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:13.616978  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:14.117292  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:14.617688  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:15.117151  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:15.616915  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:16.116815  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:16.617581  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:17.116849  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:17.617354  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:18.117233  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:11:18.117294  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:11:18.141479  181838 cri.go:87] found id: ""
	I0108 21:11:18.141507  181838 logs.go:274] 0 containers: []
	W0108 21:11:18.141515  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:11:18.141521  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:11:18.141582  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:11:18.173828  181838 cri.go:87] found id: ""
	I0108 21:11:18.173854  181838 logs.go:274] 0 containers: []
	W0108 21:11:18.173863  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:11:18.173871  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:11:18.173926  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:11:18.202471  181838 cri.go:87] found id: ""
	I0108 21:11:18.202496  181838 logs.go:274] 0 containers: []
	W0108 21:11:18.202505  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:11:18.202513  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:11:18.202567  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:11:18.229511  181838 cri.go:87] found id: ""
	I0108 21:11:18.229552  181838 logs.go:274] 0 containers: []
	W0108 21:11:18.229560  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:11:18.229568  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:11:18.229619  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:11:18.253478  181838 cri.go:87] found id: ""
	I0108 21:11:18.253504  181838 logs.go:274] 0 containers: []
	W0108 21:11:18.253513  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:11:18.253521  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:11:18.253590  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:11:18.284288  181838 cri.go:87] found id: ""
	I0108 21:11:18.284313  181838 logs.go:274] 0 containers: []
	W0108 21:11:18.284323  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:11:18.284331  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:11:18.284382  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:11:18.310108  181838 cri.go:87] found id: ""
	I0108 21:11:18.310143  181838 logs.go:274] 0 containers: []
	W0108 21:11:18.310154  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:11:18.310163  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:11:18.310223  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:11:18.333812  181838 cri.go:87] found id: ""
	I0108 21:11:18.333837  181838 logs.go:274] 0 containers: []
	W0108 21:11:18.333843  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:11:18.333851  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:11:18.333865  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:11:18.378571  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:11:18.378618  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:11:18.409451  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:11:18.409484  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:11:18.427826  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:28 kubernetes-upgrade-210902 kubelet[1392]: E0108 21:10:28.699911    1392 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.428196  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:29 kubernetes-upgrade-210902 kubelet[1405]: E0108 21:10:29.445280    1405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.428550  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:30 kubernetes-upgrade-210902 kubelet[1420]: E0108 21:10:30.198183    1420 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.428900  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:30 kubernetes-upgrade-210902 kubelet[1432]: E0108 21:10:30.947824    1432 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.429261  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:31 kubernetes-upgrade-210902 kubelet[1449]: E0108 21:10:31.698365    1449 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.429621  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:32 kubernetes-upgrade-210902 kubelet[1462]: E0108 21:10:32.451382    1462 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.429983  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:33 kubernetes-upgrade-210902 kubelet[1477]: E0108 21:10:33.199348    1477 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.430335  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:33 kubernetes-upgrade-210902 kubelet[1490]: E0108 21:10:33.955437    1490 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.430683  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:34 kubernetes-upgrade-210902 kubelet[1504]: E0108 21:10:34.696790    1504 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.431037  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:35 kubernetes-upgrade-210902 kubelet[1517]: E0108 21:10:35.447759    1517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.431390  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:36 kubernetes-upgrade-210902 kubelet[1532]: E0108 21:10:36.193334    1532 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.431792  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:36 kubernetes-upgrade-210902 kubelet[1545]: E0108 21:10:36.944823    1545 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.432152  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:37 kubernetes-upgrade-210902 kubelet[1560]: E0108 21:10:37.695242    1560 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.432498  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:38 kubernetes-upgrade-210902 kubelet[1573]: E0108 21:10:38.447870    1573 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.432845  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:39 kubernetes-upgrade-210902 kubelet[1587]: E0108 21:10:39.201625    1587 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.433199  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:39 kubernetes-upgrade-210902 kubelet[1600]: E0108 21:10:39.946010    1600 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.433548  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:40 kubernetes-upgrade-210902 kubelet[1615]: E0108 21:10:40.708186    1615 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.433899  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:41 kubernetes-upgrade-210902 kubelet[1627]: E0108 21:10:41.446320    1627 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.434251  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:42 kubernetes-upgrade-210902 kubelet[1643]: E0108 21:10:42.196140    1643 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.434601  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:42 kubernetes-upgrade-210902 kubelet[1656]: E0108 21:10:42.947328    1656 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.434964  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:43 kubernetes-upgrade-210902 kubelet[1671]: E0108 21:10:43.693703    1671 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.435316  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:44 kubernetes-upgrade-210902 kubelet[1683]: E0108 21:10:44.450190    1683 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.435685  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:45 kubernetes-upgrade-210902 kubelet[1699]: E0108 21:10:45.202501    1699 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.436036  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:45 kubernetes-upgrade-210902 kubelet[1712]: E0108 21:10:45.955937    1712 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.436386  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:46 kubernetes-upgrade-210902 kubelet[1726]: E0108 21:10:46.707780    1726 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.436743  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:47 kubernetes-upgrade-210902 kubelet[1740]: E0108 21:10:47.445782    1740 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.437095  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:48 kubernetes-upgrade-210902 kubelet[1755]: E0108 21:10:48.199282    1755 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.437451  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:48 kubernetes-upgrade-210902 kubelet[1768]: E0108 21:10:48.949498    1768 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.437820  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:49 kubernetes-upgrade-210902 kubelet[1783]: E0108 21:10:49.711401    1783 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.438176  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:50 kubernetes-upgrade-210902 kubelet[1795]: E0108 21:10:50.452285    1795 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.438530  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:51 kubernetes-upgrade-210902 kubelet[1810]: E0108 21:10:51.201842    1810 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.438877  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:51 kubernetes-upgrade-210902 kubelet[1824]: E0108 21:10:51.956982    1824 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.439243  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:52 kubernetes-upgrade-210902 kubelet[1838]: E0108 21:10:52.712808    1838 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.439617  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:53 kubernetes-upgrade-210902 kubelet[1850]: E0108 21:10:53.458650    1850 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.439973  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:54 kubernetes-upgrade-210902 kubelet[1865]: E0108 21:10:54.197765    1865 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.440329  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:54 kubernetes-upgrade-210902 kubelet[1878]: E0108 21:10:54.944887    1878 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.440679  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:55 kubernetes-upgrade-210902 kubelet[1893]: E0108 21:10:55.693217    1893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.441031  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:56 kubernetes-upgrade-210902 kubelet[1907]: E0108 21:10:56.445774    1907 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.441384  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:57 kubernetes-upgrade-210902 kubelet[1922]: E0108 21:10:57.199154    1922 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.441737  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:57 kubernetes-upgrade-210902 kubelet[1934]: E0108 21:10:57.947527    1934 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.442087  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:58 kubernetes-upgrade-210902 kubelet[1949]: E0108 21:10:58.694057    1949 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.442452  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:59 kubernetes-upgrade-210902 kubelet[1961]: E0108 21:10:59.466637    1961 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.442803  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:00 kubernetes-upgrade-210902 kubelet[1976]: E0108 21:11:00.203063    1976 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.443159  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:00 kubernetes-upgrade-210902 kubelet[1989]: E0108 21:11:00.945615    1989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.443541  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:01 kubernetes-upgrade-210902 kubelet[2004]: E0108 21:11:01.696353    2004 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.444038  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:02 kubernetes-upgrade-210902 kubelet[2017]: E0108 21:11:02.454662    2017 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.444397  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:03 kubernetes-upgrade-210902 kubelet[2031]: E0108 21:11:03.210934    2031 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.444751  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:03 kubernetes-upgrade-210902 kubelet[2043]: E0108 21:11:03.962266    2043 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.445101  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:04 kubernetes-upgrade-210902 kubelet[2058]: E0108 21:11:04.694055    2058 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.445453  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:05 kubernetes-upgrade-210902 kubelet[2071]: E0108 21:11:05.450370    2071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.445801  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:06 kubernetes-upgrade-210902 kubelet[2085]: E0108 21:11:06.195901    2085 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.446152  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:06 kubernetes-upgrade-210902 kubelet[2098]: E0108 21:11:06.961189    2098 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.446501  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:07 kubernetes-upgrade-210902 kubelet[2116]: E0108 21:11:07.704321    2116 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.446855  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:08 kubernetes-upgrade-210902 kubelet[2128]: E0108 21:11:08.447741    2128 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.447209  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:09 kubernetes-upgrade-210902 kubelet[2143]: E0108 21:11:09.204880    2143 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.447580  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:09 kubernetes-upgrade-210902 kubelet[2156]: E0108 21:11:09.971440    2156 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.447931  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:10 kubernetes-upgrade-210902 kubelet[2170]: E0108 21:11:10.701229    2170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.448287  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:11 kubernetes-upgrade-210902 kubelet[2183]: E0108 21:11:11.470164    2183 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.448638  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:12 kubernetes-upgrade-210902 kubelet[2197]: E0108 21:11:12.206854    2197 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.448982  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:12 kubernetes-upgrade-210902 kubelet[2209]: E0108 21:11:12.946066    2209 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.449332  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:13 kubernetes-upgrade-210902 kubelet[2223]: E0108 21:11:13.695518    2223 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.449680  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:14 kubernetes-upgrade-210902 kubelet[2236]: E0108 21:11:14.445905    2236 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.450026  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:15 kubernetes-upgrade-210902 kubelet[2250]: E0108 21:11:15.242534    2250 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.450382  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:15 kubernetes-upgrade-210902 kubelet[2260]: E0108 21:11:15.962554    2260 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.450728  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:16 kubernetes-upgrade-210902 kubelet[2274]: E0108 21:11:16.708177    2274 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.451082  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:17 kubernetes-upgrade-210902 kubelet[2288]: E0108 21:11:17.450151    2288 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.451440  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:18 kubernetes-upgrade-210902 kubelet[2315]: E0108 21:11:18.213302    2315 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:11:18.451615  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:11:18.451635  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:11:18.492880  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:11:18.492910  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:11:18.572250  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:11:18.572281  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:11:18.572296  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:11:18.572425  181838 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0108 21:11:18.572440  181838 out.go:239]   Jan 08 21:11:15 kubernetes-upgrade-210902 kubelet[2250]: E0108 21:11:15.242534    2250 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:15 kubernetes-upgrade-210902 kubelet[2250]: E0108 21:11:15.242534    2250 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.572448  181838 out.go:239]   Jan 08 21:11:15 kubernetes-upgrade-210902 kubelet[2260]: E0108 21:11:15.962554    2260 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:15 kubernetes-upgrade-210902 kubelet[2260]: E0108 21:11:15.962554    2260 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.572455  181838 out.go:239]   Jan 08 21:11:16 kubernetes-upgrade-210902 kubelet[2274]: E0108 21:11:16.708177    2274 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:16 kubernetes-upgrade-210902 kubelet[2274]: E0108 21:11:16.708177    2274 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.572470  181838 out.go:239]   Jan 08 21:11:17 kubernetes-upgrade-210902 kubelet[2288]: E0108 21:11:17.450151    2288 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:17 kubernetes-upgrade-210902 kubelet[2288]: E0108 21:11:17.450151    2288 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:18.572483  181838 out.go:239]   Jan 08 21:11:18 kubernetes-upgrade-210902 kubelet[2315]: E0108 21:11:18.213302    2315 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:18 kubernetes-upgrade-210902 kubelet[2315]: E0108 21:11:18.213302    2315 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:11:18.572493  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:11:18.572502  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:11:28.573918  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:28.617008  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:11:28.617079  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:11:28.662127  181838 cri.go:87] found id: ""
	I0108 21:11:28.662149  181838 logs.go:274] 0 containers: []
	W0108 21:11:28.662158  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:11:28.662165  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:11:28.662219  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:11:28.705395  181838 cri.go:87] found id: ""
	I0108 21:11:28.705418  181838 logs.go:274] 0 containers: []
	W0108 21:11:28.705426  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:11:28.705434  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:11:28.705484  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:11:28.756144  181838 cri.go:87] found id: ""
	I0108 21:11:28.756167  181838 logs.go:274] 0 containers: []
	W0108 21:11:28.756181  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:11:28.756189  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:11:28.756238  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:11:28.793518  181838 cri.go:87] found id: ""
	I0108 21:11:28.793539  181838 logs.go:274] 0 containers: []
	W0108 21:11:28.793545  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:11:28.793551  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:11:28.793590  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:11:28.838655  181838 cri.go:87] found id: ""
	I0108 21:11:28.838685  181838 logs.go:274] 0 containers: []
	W0108 21:11:28.838694  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:11:28.838701  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:11:28.838752  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:11:28.881033  181838 cri.go:87] found id: ""
	I0108 21:11:28.881057  181838 logs.go:274] 0 containers: []
	W0108 21:11:28.881063  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:11:28.881070  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:11:28.881112  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:11:28.905380  181838 cri.go:87] found id: ""
	I0108 21:11:28.905402  181838 logs.go:274] 0 containers: []
	W0108 21:11:28.905407  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:11:28.905414  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:11:28.905462  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:11:28.966504  181838 cri.go:87] found id: ""
	I0108 21:11:28.966529  181838 logs.go:274] 0 containers: []
	W0108 21:11:28.966538  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:11:28.966549  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:11:28.966564  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:11:28.986497  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:39 kubernetes-upgrade-210902 kubelet[1587]: E0108 21:10:39.201625    1587 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.987117  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:39 kubernetes-upgrade-210902 kubelet[1600]: E0108 21:10:39.946010    1600 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.987682  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:40 kubernetes-upgrade-210902 kubelet[1615]: E0108 21:10:40.708186    1615 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.988299  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:41 kubernetes-upgrade-210902 kubelet[1627]: E0108 21:10:41.446320    1627 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.988887  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:42 kubernetes-upgrade-210902 kubelet[1643]: E0108 21:10:42.196140    1643 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.989335  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:42 kubernetes-upgrade-210902 kubelet[1656]: E0108 21:10:42.947328    1656 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.989828  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:43 kubernetes-upgrade-210902 kubelet[1671]: E0108 21:10:43.693703    1671 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.990431  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:44 kubernetes-upgrade-210902 kubelet[1683]: E0108 21:10:44.450190    1683 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.990964  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:45 kubernetes-upgrade-210902 kubelet[1699]: E0108 21:10:45.202501    1699 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.991575  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:45 kubernetes-upgrade-210902 kubelet[1712]: E0108 21:10:45.955937    1712 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.992131  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:46 kubernetes-upgrade-210902 kubelet[1726]: E0108 21:10:46.707780    1726 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.992577  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:47 kubernetes-upgrade-210902 kubelet[1740]: E0108 21:10:47.445782    1740 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.992992  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:48 kubernetes-upgrade-210902 kubelet[1755]: E0108 21:10:48.199282    1755 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.993384  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:48 kubernetes-upgrade-210902 kubelet[1768]: E0108 21:10:48.949498    1768 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.993913  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:49 kubernetes-upgrade-210902 kubelet[1783]: E0108 21:10:49.711401    1783 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.994472  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:50 kubernetes-upgrade-210902 kubelet[1795]: E0108 21:10:50.452285    1795 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.994871  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:51 kubernetes-upgrade-210902 kubelet[1810]: E0108 21:10:51.201842    1810 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.995398  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:51 kubernetes-upgrade-210902 kubelet[1824]: E0108 21:10:51.956982    1824 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.995817  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:52 kubernetes-upgrade-210902 kubelet[1838]: E0108 21:10:52.712808    1838 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.996298  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:53 kubernetes-upgrade-210902 kubelet[1850]: E0108 21:10:53.458650    1850 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.996738  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:54 kubernetes-upgrade-210902 kubelet[1865]: E0108 21:10:54.197765    1865 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.997271  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:54 kubernetes-upgrade-210902 kubelet[1878]: E0108 21:10:54.944887    1878 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.997877  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:55 kubernetes-upgrade-210902 kubelet[1893]: E0108 21:10:55.693217    1893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.998289  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:56 kubernetes-upgrade-210902 kubelet[1907]: E0108 21:10:56.445774    1907 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.998704  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:57 kubernetes-upgrade-210902 kubelet[1922]: E0108 21:10:57.199154    1922 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.999119  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:57 kubernetes-upgrade-210902 kubelet[1934]: E0108 21:10:57.947527    1934 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:28.999583  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:58 kubernetes-upgrade-210902 kubelet[1949]: E0108 21:10:58.694057    1949 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.000111  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:59 kubernetes-upgrade-210902 kubelet[1961]: E0108 21:10:59.466637    1961 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.000570  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:00 kubernetes-upgrade-210902 kubelet[1976]: E0108 21:11:00.203063    1976 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.000996  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:00 kubernetes-upgrade-210902 kubelet[1989]: E0108 21:11:00.945615    1989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.001509  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:01 kubernetes-upgrade-210902 kubelet[2004]: E0108 21:11:01.696353    2004 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.002038  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:02 kubernetes-upgrade-210902 kubelet[2017]: E0108 21:11:02.454662    2017 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.002529  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:03 kubernetes-upgrade-210902 kubelet[2031]: E0108 21:11:03.210934    2031 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.003014  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:03 kubernetes-upgrade-210902 kubelet[2043]: E0108 21:11:03.962266    2043 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.003432  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:04 kubernetes-upgrade-210902 kubelet[2058]: E0108 21:11:04.694055    2058 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.003826  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:05 kubernetes-upgrade-210902 kubelet[2071]: E0108 21:11:05.450370    2071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.004225  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:06 kubernetes-upgrade-210902 kubelet[2085]: E0108 21:11:06.195901    2085 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.004596  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:06 kubernetes-upgrade-210902 kubelet[2098]: E0108 21:11:06.961189    2098 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.004974  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:07 kubernetes-upgrade-210902 kubelet[2116]: E0108 21:11:07.704321    2116 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.005493  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:08 kubernetes-upgrade-210902 kubelet[2128]: E0108 21:11:08.447741    2128 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.006066  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:09 kubernetes-upgrade-210902 kubelet[2143]: E0108 21:11:09.204880    2143 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.006600  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:09 kubernetes-upgrade-210902 kubelet[2156]: E0108 21:11:09.971440    2156 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.007199  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:10 kubernetes-upgrade-210902 kubelet[2170]: E0108 21:11:10.701229    2170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.007692  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:11 kubernetes-upgrade-210902 kubelet[2183]: E0108 21:11:11.470164    2183 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.008052  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:12 kubernetes-upgrade-210902 kubelet[2197]: E0108 21:11:12.206854    2197 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.008418  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:12 kubernetes-upgrade-210902 kubelet[2209]: E0108 21:11:12.946066    2209 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.008769  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:13 kubernetes-upgrade-210902 kubelet[2223]: E0108 21:11:13.695518    2223 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.009132  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:14 kubernetes-upgrade-210902 kubelet[2236]: E0108 21:11:14.445905    2236 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.009485  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:15 kubernetes-upgrade-210902 kubelet[2250]: E0108 21:11:15.242534    2250 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.010108  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:15 kubernetes-upgrade-210902 kubelet[2260]: E0108 21:11:15.962554    2260 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.010759  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:16 kubernetes-upgrade-210902 kubelet[2274]: E0108 21:11:16.708177    2274 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.011400  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:17 kubernetes-upgrade-210902 kubelet[2288]: E0108 21:11:17.450151    2288 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.015625  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:18 kubernetes-upgrade-210902 kubelet[2315]: E0108 21:11:18.213302    2315 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.016289  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:18 kubernetes-upgrade-210902 kubelet[2443]: E0108 21:11:18.945354    2443 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.016929  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:19 kubernetes-upgrade-210902 kubelet[2454]: E0108 21:11:19.704297    2454 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.017574  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:20 kubernetes-upgrade-210902 kubelet[2464]: E0108 21:11:20.467902    2464 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.018215  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:21 kubernetes-upgrade-210902 kubelet[2474]: E0108 21:11:21.203542    2474 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.018853  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:21 kubernetes-upgrade-210902 kubelet[2485]: E0108 21:11:21.955774    2485 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.019525  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:22 kubernetes-upgrade-210902 kubelet[2496]: E0108 21:11:22.696146    2496 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.020166  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:23 kubernetes-upgrade-210902 kubelet[2507]: E0108 21:11:23.460169    2507 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.020806  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:24 kubernetes-upgrade-210902 kubelet[2517]: E0108 21:11:24.206181    2517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.021463  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:24 kubernetes-upgrade-210902 kubelet[2527]: E0108 21:11:24.965228    2527 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.022107  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:25 kubernetes-upgrade-210902 kubelet[2536]: E0108 21:11:25.702253    2536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.022746  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:26 kubernetes-upgrade-210902 kubelet[2547]: E0108 21:11:26.456785    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.023388  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:27 kubernetes-upgrade-210902 kubelet[2558]: E0108 21:11:27.194490    2558 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.024038  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:27 kubernetes-upgrade-210902 kubelet[2568]: E0108 21:11:27.984507    2568 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.024684  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:28 kubernetes-upgrade-210902 kubelet[2591]: E0108 21:11:28.722407    2591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:11:29.024897  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:11:29.024923  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:11:29.047459  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:11:29.047525  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:11:29.118645  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:11:29.118679  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:11:29.118692  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:11:29.173839  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:11:29.173877  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:11:29.200453  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:11:29.200478  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:11:29.200607  181838 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0108 21:11:29.200622  181838 out.go:239]   Jan 08 21:11:25 kubernetes-upgrade-210902 kubelet[2536]: E0108 21:11:25.702253    2536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:25 kubernetes-upgrade-210902 kubelet[2536]: E0108 21:11:25.702253    2536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.200627  181838 out.go:239]   Jan 08 21:11:26 kubernetes-upgrade-210902 kubelet[2547]: E0108 21:11:26.456785    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:26 kubernetes-upgrade-210902 kubelet[2547]: E0108 21:11:26.456785    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.200632  181838 out.go:239]   Jan 08 21:11:27 kubernetes-upgrade-210902 kubelet[2558]: E0108 21:11:27.194490    2558 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:27 kubernetes-upgrade-210902 kubelet[2558]: E0108 21:11:27.194490    2558 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.200638  181838 out.go:239]   Jan 08 21:11:27 kubernetes-upgrade-210902 kubelet[2568]: E0108 21:11:27.984507    2568 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:27 kubernetes-upgrade-210902 kubelet[2568]: E0108 21:11:27.984507    2568 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:29.200643  181838 out.go:239]   Jan 08 21:11:28 kubernetes-upgrade-210902 kubelet[2591]: E0108 21:11:28.722407    2591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:28 kubernetes-upgrade-210902 kubelet[2591]: E0108 21:11:28.722407    2591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:11:29.200648  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:11:29.200653  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:11:39.201710  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:39.617595  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:11:39.617663  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:11:39.644204  181838 cri.go:87] found id: ""
	I0108 21:11:39.644230  181838 logs.go:274] 0 containers: []
	W0108 21:11:39.644240  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:11:39.644247  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:11:39.644297  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:11:39.674984  181838 cri.go:87] found id: ""
	I0108 21:11:39.675051  181838 logs.go:274] 0 containers: []
	W0108 21:11:39.675065  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:11:39.675077  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:11:39.675125  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:11:39.700423  181838 cri.go:87] found id: ""
	I0108 21:11:39.700447  181838 logs.go:274] 0 containers: []
	W0108 21:11:39.700453  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:11:39.700459  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:11:39.700499  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:11:39.724427  181838 cri.go:87] found id: ""
	I0108 21:11:39.724463  181838 logs.go:274] 0 containers: []
	W0108 21:11:39.724472  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:11:39.724480  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:11:39.724535  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:11:39.749696  181838 cri.go:87] found id: ""
	I0108 21:11:39.749721  181838 logs.go:274] 0 containers: []
	W0108 21:11:39.749727  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:11:39.749734  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:11:39.749779  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:11:39.774172  181838 cri.go:87] found id: ""
	I0108 21:11:39.774200  181838 logs.go:274] 0 containers: []
	W0108 21:11:39.774209  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:11:39.774216  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:11:39.774259  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:11:39.800453  181838 cri.go:87] found id: ""
	I0108 21:11:39.800479  181838 logs.go:274] 0 containers: []
	W0108 21:11:39.800486  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:11:39.800492  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:11:39.800542  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:11:39.824062  181838 cri.go:87] found id: ""
	I0108 21:11:39.824084  181838 logs.go:274] 0 containers: []
	W0108 21:11:39.824092  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:11:39.824104  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:11:39.824118  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:11:39.851629  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:11:39.851663  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:11:39.868991  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:49 kubernetes-upgrade-210902 kubelet[1783]: E0108 21:10:49.711401    1783 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.869369  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:50 kubernetes-upgrade-210902 kubelet[1795]: E0108 21:10:50.452285    1795 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.869725  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:51 kubernetes-upgrade-210902 kubelet[1810]: E0108 21:10:51.201842    1810 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.870080  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:51 kubernetes-upgrade-210902 kubelet[1824]: E0108 21:10:51.956982    1824 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.870433  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:52 kubernetes-upgrade-210902 kubelet[1838]: E0108 21:10:52.712808    1838 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.870791  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:53 kubernetes-upgrade-210902 kubelet[1850]: E0108 21:10:53.458650    1850 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.871152  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:54 kubernetes-upgrade-210902 kubelet[1865]: E0108 21:10:54.197765    1865 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.871545  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:54 kubernetes-upgrade-210902 kubelet[1878]: E0108 21:10:54.944887    1878 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.871901  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:55 kubernetes-upgrade-210902 kubelet[1893]: E0108 21:10:55.693217    1893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.872257  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:56 kubernetes-upgrade-210902 kubelet[1907]: E0108 21:10:56.445774    1907 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.872612  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:57 kubernetes-upgrade-210902 kubelet[1922]: E0108 21:10:57.199154    1922 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.872961  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:57 kubernetes-upgrade-210902 kubelet[1934]: E0108 21:10:57.947527    1934 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.873359  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:58 kubernetes-upgrade-210902 kubelet[1949]: E0108 21:10:58.694057    1949 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.873817  181838 logs.go:138] Found kubelet problem: Jan 08 21:10:59 kubernetes-upgrade-210902 kubelet[1961]: E0108 21:10:59.466637    1961 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.874419  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:00 kubernetes-upgrade-210902 kubelet[1976]: E0108 21:11:00.203063    1976 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.874772  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:00 kubernetes-upgrade-210902 kubelet[1989]: E0108 21:11:00.945615    1989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.875169  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:01 kubernetes-upgrade-210902 kubelet[2004]: E0108 21:11:01.696353    2004 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.875604  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:02 kubernetes-upgrade-210902 kubelet[2017]: E0108 21:11:02.454662    2017 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.875959  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:03 kubernetes-upgrade-210902 kubelet[2031]: E0108 21:11:03.210934    2031 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.876312  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:03 kubernetes-upgrade-210902 kubelet[2043]: E0108 21:11:03.962266    2043 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.876688  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:04 kubernetes-upgrade-210902 kubelet[2058]: E0108 21:11:04.694055    2058 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.877067  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:05 kubernetes-upgrade-210902 kubelet[2071]: E0108 21:11:05.450370    2071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.877421  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:06 kubernetes-upgrade-210902 kubelet[2085]: E0108 21:11:06.195901    2085 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.877769  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:06 kubernetes-upgrade-210902 kubelet[2098]: E0108 21:11:06.961189    2098 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.878154  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:07 kubernetes-upgrade-210902 kubelet[2116]: E0108 21:11:07.704321    2116 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.878505  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:08 kubernetes-upgrade-210902 kubelet[2128]: E0108 21:11:08.447741    2128 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.878857  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:09 kubernetes-upgrade-210902 kubelet[2143]: E0108 21:11:09.204880    2143 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.879228  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:09 kubernetes-upgrade-210902 kubelet[2156]: E0108 21:11:09.971440    2156 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.879611  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:10 kubernetes-upgrade-210902 kubelet[2170]: E0108 21:11:10.701229    2170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.879967  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:11 kubernetes-upgrade-210902 kubelet[2183]: E0108 21:11:11.470164    2183 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.880368  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:12 kubernetes-upgrade-210902 kubelet[2197]: E0108 21:11:12.206854    2197 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.880750  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:12 kubernetes-upgrade-210902 kubelet[2209]: E0108 21:11:12.946066    2209 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.881133  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:13 kubernetes-upgrade-210902 kubelet[2223]: E0108 21:11:13.695518    2223 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.881522  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:14 kubernetes-upgrade-210902 kubelet[2236]: E0108 21:11:14.445905    2236 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.881909  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:15 kubernetes-upgrade-210902 kubelet[2250]: E0108 21:11:15.242534    2250 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.882291  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:15 kubernetes-upgrade-210902 kubelet[2260]: E0108 21:11:15.962554    2260 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.882683  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:16 kubernetes-upgrade-210902 kubelet[2274]: E0108 21:11:16.708177    2274 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.883068  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:17 kubernetes-upgrade-210902 kubelet[2288]: E0108 21:11:17.450151    2288 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.883602  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:18 kubernetes-upgrade-210902 kubelet[2315]: E0108 21:11:18.213302    2315 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.884059  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:18 kubernetes-upgrade-210902 kubelet[2443]: E0108 21:11:18.945354    2443 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.884475  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:19 kubernetes-upgrade-210902 kubelet[2454]: E0108 21:11:19.704297    2454 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.884860  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:20 kubernetes-upgrade-210902 kubelet[2464]: E0108 21:11:20.467902    2464 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.885254  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:21 kubernetes-upgrade-210902 kubelet[2474]: E0108 21:11:21.203542    2474 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.885637  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:21 kubernetes-upgrade-210902 kubelet[2485]: E0108 21:11:21.955774    2485 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.886020  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:22 kubernetes-upgrade-210902 kubelet[2496]: E0108 21:11:22.696146    2496 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.886410  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:23 kubernetes-upgrade-210902 kubelet[2507]: E0108 21:11:23.460169    2507 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.886791  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:24 kubernetes-upgrade-210902 kubelet[2517]: E0108 21:11:24.206181    2517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.887178  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:24 kubernetes-upgrade-210902 kubelet[2527]: E0108 21:11:24.965228    2527 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.887610  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:25 kubernetes-upgrade-210902 kubelet[2536]: E0108 21:11:25.702253    2536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.888140  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:26 kubernetes-upgrade-210902 kubelet[2547]: E0108 21:11:26.456785    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.888559  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:27 kubernetes-upgrade-210902 kubelet[2558]: E0108 21:11:27.194490    2558 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.888956  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:27 kubernetes-upgrade-210902 kubelet[2568]: E0108 21:11:27.984507    2568 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.889337  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:28 kubernetes-upgrade-210902 kubelet[2591]: E0108 21:11:28.722407    2591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.889741  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:29 kubernetes-upgrade-210902 kubelet[2718]: E0108 21:11:29.472298    2718 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.890128  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:30 kubernetes-upgrade-210902 kubelet[2728]: E0108 21:11:30.205639    2728 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.890522  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:30 kubernetes-upgrade-210902 kubelet[2739]: E0108 21:11:30.991580    2739 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.890908  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:31 kubernetes-upgrade-210902 kubelet[2748]: E0108 21:11:31.694738    2748 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.891304  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:32 kubernetes-upgrade-210902 kubelet[2758]: E0108 21:11:32.465365    2758 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.891728  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:33 kubernetes-upgrade-210902 kubelet[2769]: E0108 21:11:33.209351    2769 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.892116  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:33 kubernetes-upgrade-210902 kubelet[2780]: E0108 21:11:33.946681    2780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.892512  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:34 kubernetes-upgrade-210902 kubelet[2790]: E0108 21:11:34.712452    2790 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.893000  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:35 kubernetes-upgrade-210902 kubelet[2801]: E0108 21:11:35.473770    2801 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.893547  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:36 kubernetes-upgrade-210902 kubelet[2811]: E0108 21:11:36.212184    2811 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.894137  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:36 kubernetes-upgrade-210902 kubelet[2822]: E0108 21:11:36.948159    2822 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.894714  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:37 kubernetes-upgrade-210902 kubelet[2833]: E0108 21:11:37.698498    2833 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.895175  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:38 kubernetes-upgrade-210902 kubelet[2846]: E0108 21:11:38.453515    2846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:39.895576  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:39 kubernetes-upgrade-210902 kubelet[2857]: E0108 21:11:39.198105    2857 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:11:39.895697  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:11:39.895713  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:11:39.912250  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:11:39.912280  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:11:39.979315  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:11:39.979340  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:11:39.979355  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:11:40.022819  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:11:40.022857  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:11:40.023002  181838 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0108 21:11:40.023021  181838 out.go:239]   Jan 08 21:11:36 kubernetes-upgrade-210902 kubelet[2811]: E0108 21:11:36.212184    2811 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:36 kubernetes-upgrade-210902 kubelet[2811]: E0108 21:11:36.212184    2811 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:40.023031  181838 out.go:239]   Jan 08 21:11:36 kubernetes-upgrade-210902 kubelet[2822]: E0108 21:11:36.948159    2822 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:36 kubernetes-upgrade-210902 kubelet[2822]: E0108 21:11:36.948159    2822 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:40.023038  181838 out.go:239]   Jan 08 21:11:37 kubernetes-upgrade-210902 kubelet[2833]: E0108 21:11:37.698498    2833 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:37 kubernetes-upgrade-210902 kubelet[2833]: E0108 21:11:37.698498    2833 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:40.023048  181838 out.go:239]   Jan 08 21:11:38 kubernetes-upgrade-210902 kubelet[2846]: E0108 21:11:38.453515    2846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:38 kubernetes-upgrade-210902 kubelet[2846]: E0108 21:11:38.453515    2846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:40.023058  181838 out.go:239]   Jan 08 21:11:39 kubernetes-upgrade-210902 kubelet[2857]: E0108 21:11:39.198105    2857 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:39 kubernetes-upgrade-210902 kubelet[2857]: E0108 21:11:39.198105    2857 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:11:40.023068  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:11:40.023080  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:11:50.025145  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:11:50.117331  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:11:50.117416  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:11:50.145717  181838 cri.go:87] found id: ""
	I0108 21:11:50.145737  181838 logs.go:274] 0 containers: []
	W0108 21:11:50.145744  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:11:50.145754  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:11:50.145808  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:11:50.175024  181838 cri.go:87] found id: ""
	I0108 21:11:50.175048  181838 logs.go:274] 0 containers: []
	W0108 21:11:50.175056  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:11:50.175064  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:11:50.175117  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:11:50.208601  181838 cri.go:87] found id: ""
	I0108 21:11:50.208625  181838 logs.go:274] 0 containers: []
	W0108 21:11:50.208635  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:11:50.208642  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:11:50.208696  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:11:50.237375  181838 cri.go:87] found id: ""
	I0108 21:11:50.237403  181838 logs.go:274] 0 containers: []
	W0108 21:11:50.237410  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:11:50.237416  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:11:50.237458  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:11:50.266015  181838 cri.go:87] found id: ""
	I0108 21:11:50.266039  181838 logs.go:274] 0 containers: []
	W0108 21:11:50.266048  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:11:50.266055  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:11:50.266106  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:11:50.300897  181838 cri.go:87] found id: ""
	I0108 21:11:50.300934  181838 logs.go:274] 0 containers: []
	W0108 21:11:50.300945  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:11:50.300954  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:11:50.301006  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:11:50.332713  181838 cri.go:87] found id: ""
	I0108 21:11:50.332737  181838 logs.go:274] 0 containers: []
	W0108 21:11:50.332746  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:11:50.332754  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:11:50.332816  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:11:50.366668  181838 cri.go:87] found id: ""
	I0108 21:11:50.366694  181838 logs.go:274] 0 containers: []
	W0108 21:11:50.366703  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:11:50.366714  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:11:50.366726  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:11:50.386722  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:00 kubernetes-upgrade-210902 kubelet[1976]: E0108 21:11:00.203063    1976 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.387107  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:00 kubernetes-upgrade-210902 kubelet[1989]: E0108 21:11:00.945615    1989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.387604  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:01 kubernetes-upgrade-210902 kubelet[2004]: E0108 21:11:01.696353    2004 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.388184  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:02 kubernetes-upgrade-210902 kubelet[2017]: E0108 21:11:02.454662    2017 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.388787  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:03 kubernetes-upgrade-210902 kubelet[2031]: E0108 21:11:03.210934    2031 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.389404  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:03 kubernetes-upgrade-210902 kubelet[2043]: E0108 21:11:03.962266    2043 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.390026  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:04 kubernetes-upgrade-210902 kubelet[2058]: E0108 21:11:04.694055    2058 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.390653  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:05 kubernetes-upgrade-210902 kubelet[2071]: E0108 21:11:05.450370    2071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.391255  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:06 kubernetes-upgrade-210902 kubelet[2085]: E0108 21:11:06.195901    2085 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.391919  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:06 kubernetes-upgrade-210902 kubelet[2098]: E0108 21:11:06.961189    2098 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.392475  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:07 kubernetes-upgrade-210902 kubelet[2116]: E0108 21:11:07.704321    2116 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.393097  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:08 kubernetes-upgrade-210902 kubelet[2128]: E0108 21:11:08.447741    2128 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.393679  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:09 kubernetes-upgrade-210902 kubelet[2143]: E0108 21:11:09.204880    2143 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.394202  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:09 kubernetes-upgrade-210902 kubelet[2156]: E0108 21:11:09.971440    2156 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.394698  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:10 kubernetes-upgrade-210902 kubelet[2170]: E0108 21:11:10.701229    2170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.395085  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:11 kubernetes-upgrade-210902 kubelet[2183]: E0108 21:11:11.470164    2183 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.395581  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:12 kubernetes-upgrade-210902 kubelet[2197]: E0108 21:11:12.206854    2197 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.396126  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:12 kubernetes-upgrade-210902 kubelet[2209]: E0108 21:11:12.946066    2209 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.396770  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:13 kubernetes-upgrade-210902 kubelet[2223]: E0108 21:11:13.695518    2223 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.397350  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:14 kubernetes-upgrade-210902 kubelet[2236]: E0108 21:11:14.445905    2236 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.397971  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:15 kubernetes-upgrade-210902 kubelet[2250]: E0108 21:11:15.242534    2250 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.398520  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:15 kubernetes-upgrade-210902 kubelet[2260]: E0108 21:11:15.962554    2260 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.399120  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:16 kubernetes-upgrade-210902 kubelet[2274]: E0108 21:11:16.708177    2274 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.399803  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:17 kubernetes-upgrade-210902 kubelet[2288]: E0108 21:11:17.450151    2288 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.400384  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:18 kubernetes-upgrade-210902 kubelet[2315]: E0108 21:11:18.213302    2315 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.400936  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:18 kubernetes-upgrade-210902 kubelet[2443]: E0108 21:11:18.945354    2443 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.401568  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:19 kubernetes-upgrade-210902 kubelet[2454]: E0108 21:11:19.704297    2454 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.402215  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:20 kubernetes-upgrade-210902 kubelet[2464]: E0108 21:11:20.467902    2464 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.402806  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:21 kubernetes-upgrade-210902 kubelet[2474]: E0108 21:11:21.203542    2474 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.403412  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:21 kubernetes-upgrade-210902 kubelet[2485]: E0108 21:11:21.955774    2485 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.404127  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:22 kubernetes-upgrade-210902 kubelet[2496]: E0108 21:11:22.696146    2496 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.404763  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:23 kubernetes-upgrade-210902 kubelet[2507]: E0108 21:11:23.460169    2507 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.405378  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:24 kubernetes-upgrade-210902 kubelet[2517]: E0108 21:11:24.206181    2517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.405949  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:24 kubernetes-upgrade-210902 kubelet[2527]: E0108 21:11:24.965228    2527 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.406418  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:25 kubernetes-upgrade-210902 kubelet[2536]: E0108 21:11:25.702253    2536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.407076  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:26 kubernetes-upgrade-210902 kubelet[2547]: E0108 21:11:26.456785    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.407723  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:27 kubernetes-upgrade-210902 kubelet[2558]: E0108 21:11:27.194490    2558 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.408393  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:27 kubernetes-upgrade-210902 kubelet[2568]: E0108 21:11:27.984507    2568 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.408948  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:28 kubernetes-upgrade-210902 kubelet[2591]: E0108 21:11:28.722407    2591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.409382  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:29 kubernetes-upgrade-210902 kubelet[2718]: E0108 21:11:29.472298    2718 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.409762  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:30 kubernetes-upgrade-210902 kubelet[2728]: E0108 21:11:30.205639    2728 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.410130  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:30 kubernetes-upgrade-210902 kubelet[2739]: E0108 21:11:30.991580    2739 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.410572  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:31 kubernetes-upgrade-210902 kubelet[2748]: E0108 21:11:31.694738    2748 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.411238  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:32 kubernetes-upgrade-210902 kubelet[2758]: E0108 21:11:32.465365    2758 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.411907  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:33 kubernetes-upgrade-210902 kubelet[2769]: E0108 21:11:33.209351    2769 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.412358  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:33 kubernetes-upgrade-210902 kubelet[2780]: E0108 21:11:33.946681    2780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.412979  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:34 kubernetes-upgrade-210902 kubelet[2790]: E0108 21:11:34.712452    2790 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.413442  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:35 kubernetes-upgrade-210902 kubelet[2801]: E0108 21:11:35.473770    2801 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.414078  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:36 kubernetes-upgrade-210902 kubelet[2811]: E0108 21:11:36.212184    2811 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.414716  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:36 kubernetes-upgrade-210902 kubelet[2822]: E0108 21:11:36.948159    2822 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.415366  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:37 kubernetes-upgrade-210902 kubelet[2833]: E0108 21:11:37.698498    2833 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.416023  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:38 kubernetes-upgrade-210902 kubelet[2846]: E0108 21:11:38.453515    2846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.416600  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:39 kubernetes-upgrade-210902 kubelet[2857]: E0108 21:11:39.198105    2857 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.416986  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:39 kubernetes-upgrade-210902 kubelet[2989]: E0108 21:11:39.956820    2989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.417358  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:40 kubernetes-upgrade-210902 kubelet[3016]: E0108 21:11:40.700298    3016 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.417759  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:41 kubernetes-upgrade-210902 kubelet[3027]: E0108 21:11:41.449781    3027 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.418131  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:42 kubernetes-upgrade-210902 kubelet[3038]: E0108 21:11:42.198430    3038 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.418495  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:42 kubernetes-upgrade-210902 kubelet[3049]: E0108 21:11:42.949143    3049 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.418947  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:43 kubernetes-upgrade-210902 kubelet[3060]: E0108 21:11:43.703572    3060 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.419436  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:44 kubernetes-upgrade-210902 kubelet[3073]: E0108 21:11:44.472399    3073 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.419835  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:45 kubernetes-upgrade-210902 kubelet[3083]: E0108 21:11:45.206857    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.420271  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:45 kubernetes-upgrade-210902 kubelet[3094]: E0108 21:11:45.955227    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.420660  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:46 kubernetes-upgrade-210902 kubelet[3105]: E0108 21:11:46.702270    3105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.421052  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:47 kubernetes-upgrade-210902 kubelet[3116]: E0108 21:11:47.475257    3116 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.421424  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:48 kubernetes-upgrade-210902 kubelet[3127]: E0108 21:11:48.197769    3127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.421811  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:48 kubernetes-upgrade-210902 kubelet[3138]: E0108 21:11:48.976718    3138 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.422181  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:49 kubernetes-upgrade-210902 kubelet[3148]: E0108 21:11:49.709941    3148 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:11:50.422315  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:11:50.422338  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:11:50.446599  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:11:50.446641  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:11:50.535747  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:11:50.535774  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:11:50.535788  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:11:50.585461  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:11:50.585492  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:11:50.618574  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:11:50.618660  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:11:50.618813  181838 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0108 21:11:50.618858  181838 out.go:239]   Jan 08 21:11:46 kubernetes-upgrade-210902 kubelet[3105]: E0108 21:11:46.702270    3105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:46 kubernetes-upgrade-210902 kubelet[3105]: E0108 21:11:46.702270    3105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.618883  181838 out.go:239]   Jan 08 21:11:47 kubernetes-upgrade-210902 kubelet[3116]: E0108 21:11:47.475257    3116 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:47 kubernetes-upgrade-210902 kubelet[3116]: E0108 21:11:47.475257    3116 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.618931  181838 out.go:239]   Jan 08 21:11:48 kubernetes-upgrade-210902 kubelet[3127]: E0108 21:11:48.197769    3127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:48 kubernetes-upgrade-210902 kubelet[3127]: E0108 21:11:48.197769    3127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.618969  181838 out.go:239]   Jan 08 21:11:48 kubernetes-upgrade-210902 kubelet[3138]: E0108 21:11:48.976718    3138 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:48 kubernetes-upgrade-210902 kubelet[3138]: E0108 21:11:48.976718    3138 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:11:50.619042  181838 out.go:239]   Jan 08 21:11:49 kubernetes-upgrade-210902 kubelet[3148]: E0108 21:11:49.709941    3148 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:49 kubernetes-upgrade-210902 kubelet[3148]: E0108 21:11:49.709941    3148 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:11:50.619071  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:11:50.619093  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:12:00.620993  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:12:01.117411  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:12:01.117490  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:12:01.148594  181838 cri.go:87] found id: ""
	I0108 21:12:01.148620  181838 logs.go:274] 0 containers: []
	W0108 21:12:01.148628  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:12:01.148637  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:12:01.148693  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:12:01.177216  181838 cri.go:87] found id: ""
	I0108 21:12:01.177246  181838 logs.go:274] 0 containers: []
	W0108 21:12:01.177256  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:12:01.177265  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:12:01.177319  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:12:01.205006  181838 cri.go:87] found id: ""
	I0108 21:12:01.205037  181838 logs.go:274] 0 containers: []
	W0108 21:12:01.205045  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:12:01.205053  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:12:01.205113  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:12:01.235690  181838 cri.go:87] found id: ""
	I0108 21:12:01.235718  181838 logs.go:274] 0 containers: []
	W0108 21:12:01.235727  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:12:01.235737  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:12:01.235789  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:12:01.266622  181838 cri.go:87] found id: ""
	I0108 21:12:01.266647  181838 logs.go:274] 0 containers: []
	W0108 21:12:01.266657  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:12:01.266664  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:12:01.266713  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:12:01.294988  181838 cri.go:87] found id: ""
	I0108 21:12:01.295016  181838 logs.go:274] 0 containers: []
	W0108 21:12:01.295025  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:12:01.295038  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:12:01.295088  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:12:01.323978  181838 cri.go:87] found id: ""
	I0108 21:12:01.324017  181838 logs.go:274] 0 containers: []
	W0108 21:12:01.324027  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:12:01.324036  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:12:01.324094  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:12:01.353293  181838 cri.go:87] found id: ""
	I0108 21:12:01.353332  181838 logs.go:274] 0 containers: []
	W0108 21:12:01.353342  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:12:01.353355  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:12:01.353372  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:12:01.371038  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:12:01.371077  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:12:01.437333  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:12:01.437372  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:12:01.437389  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:12:01.493798  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:12:01.493850  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:12:01.528180  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:12:01.528210  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:12:01.548954  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:11 kubernetes-upgrade-210902 kubelet[2183]: E0108 21:11:11.470164    2183 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.549545  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:12 kubernetes-upgrade-210902 kubelet[2197]: E0108 21:11:12.206854    2197 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.550128  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:12 kubernetes-upgrade-210902 kubelet[2209]: E0108 21:11:12.946066    2209 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.550703  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:13 kubernetes-upgrade-210902 kubelet[2223]: E0108 21:11:13.695518    2223 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.551278  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:14 kubernetes-upgrade-210902 kubelet[2236]: E0108 21:11:14.445905    2236 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.551907  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:15 kubernetes-upgrade-210902 kubelet[2250]: E0108 21:11:15.242534    2250 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.552523  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:15 kubernetes-upgrade-210902 kubelet[2260]: E0108 21:11:15.962554    2260 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.553054  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:16 kubernetes-upgrade-210902 kubelet[2274]: E0108 21:11:16.708177    2274 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.553576  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:17 kubernetes-upgrade-210902 kubelet[2288]: E0108 21:11:17.450151    2288 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.554213  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:18 kubernetes-upgrade-210902 kubelet[2315]: E0108 21:11:18.213302    2315 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.554761  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:18 kubernetes-upgrade-210902 kubelet[2443]: E0108 21:11:18.945354    2443 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.555222  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:19 kubernetes-upgrade-210902 kubelet[2454]: E0108 21:11:19.704297    2454 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.555733  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:20 kubernetes-upgrade-210902 kubelet[2464]: E0108 21:11:20.467902    2464 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.556322  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:21 kubernetes-upgrade-210902 kubelet[2474]: E0108 21:11:21.203542    2474 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.556826  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:21 kubernetes-upgrade-210902 kubelet[2485]: E0108 21:11:21.955774    2485 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.557224  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:22 kubernetes-upgrade-210902 kubelet[2496]: E0108 21:11:22.696146    2496 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.557783  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:23 kubernetes-upgrade-210902 kubelet[2507]: E0108 21:11:23.460169    2507 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.558295  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:24 kubernetes-upgrade-210902 kubelet[2517]: E0108 21:11:24.206181    2517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.558817  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:24 kubernetes-upgrade-210902 kubelet[2527]: E0108 21:11:24.965228    2527 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.559425  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:25 kubernetes-upgrade-210902 kubelet[2536]: E0108 21:11:25.702253    2536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.559986  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:26 kubernetes-upgrade-210902 kubelet[2547]: E0108 21:11:26.456785    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.560540  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:27 kubernetes-upgrade-210902 kubelet[2558]: E0108 21:11:27.194490    2558 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.561034  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:27 kubernetes-upgrade-210902 kubelet[2568]: E0108 21:11:27.984507    2568 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.561467  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:28 kubernetes-upgrade-210902 kubelet[2591]: E0108 21:11:28.722407    2591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.561918  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:29 kubernetes-upgrade-210902 kubelet[2718]: E0108 21:11:29.472298    2718 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.562304  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:30 kubernetes-upgrade-210902 kubelet[2728]: E0108 21:11:30.205639    2728 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.562864  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:30 kubernetes-upgrade-210902 kubelet[2739]: E0108 21:11:30.991580    2739 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.563240  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:31 kubernetes-upgrade-210902 kubelet[2748]: E0108 21:11:31.694738    2748 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.563676  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:32 kubernetes-upgrade-210902 kubelet[2758]: E0108 21:11:32.465365    2758 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.564053  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:33 kubernetes-upgrade-210902 kubelet[2769]: E0108 21:11:33.209351    2769 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.564439  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:33 kubernetes-upgrade-210902 kubelet[2780]: E0108 21:11:33.946681    2780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.564855  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:34 kubernetes-upgrade-210902 kubelet[2790]: E0108 21:11:34.712452    2790 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.565294  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:35 kubernetes-upgrade-210902 kubelet[2801]: E0108 21:11:35.473770    2801 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.565784  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:36 kubernetes-upgrade-210902 kubelet[2811]: E0108 21:11:36.212184    2811 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.566249  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:36 kubernetes-upgrade-210902 kubelet[2822]: E0108 21:11:36.948159    2822 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.566860  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:37 kubernetes-upgrade-210902 kubelet[2833]: E0108 21:11:37.698498    2833 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.567396  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:38 kubernetes-upgrade-210902 kubelet[2846]: E0108 21:11:38.453515    2846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.567809  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:39 kubernetes-upgrade-210902 kubelet[2857]: E0108 21:11:39.198105    2857 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.568265  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:39 kubernetes-upgrade-210902 kubelet[2989]: E0108 21:11:39.956820    2989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.568774  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:40 kubernetes-upgrade-210902 kubelet[3016]: E0108 21:11:40.700298    3016 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.569305  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:41 kubernetes-upgrade-210902 kubelet[3027]: E0108 21:11:41.449781    3027 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.569905  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:42 kubernetes-upgrade-210902 kubelet[3038]: E0108 21:11:42.198430    3038 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.570519  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:42 kubernetes-upgrade-210902 kubelet[3049]: E0108 21:11:42.949143    3049 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.571106  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:43 kubernetes-upgrade-210902 kubelet[3060]: E0108 21:11:43.703572    3060 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.571650  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:44 kubernetes-upgrade-210902 kubelet[3073]: E0108 21:11:44.472399    3073 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.572279  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:45 kubernetes-upgrade-210902 kubelet[3083]: E0108 21:11:45.206857    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.572867  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:45 kubernetes-upgrade-210902 kubelet[3094]: E0108 21:11:45.955227    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.573379  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:46 kubernetes-upgrade-210902 kubelet[3105]: E0108 21:11:46.702270    3105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.573854  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:47 kubernetes-upgrade-210902 kubelet[3116]: E0108 21:11:47.475257    3116 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.574336  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:48 kubernetes-upgrade-210902 kubelet[3127]: E0108 21:11:48.197769    3127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.574862  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:48 kubernetes-upgrade-210902 kubelet[3138]: E0108 21:11:48.976718    3138 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.575370  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:49 kubernetes-upgrade-210902 kubelet[3148]: E0108 21:11:49.709941    3148 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.575929  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:50 kubernetes-upgrade-210902 kubelet[3259]: E0108 21:11:50.469341    3259 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.576446  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:51 kubernetes-upgrade-210902 kubelet[3298]: E0108 21:11:51.210878    3298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.577004  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:51 kubernetes-upgrade-210902 kubelet[3309]: E0108 21:11:51.950966    3309 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.577571  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:52 kubernetes-upgrade-210902 kubelet[3319]: E0108 21:11:52.697299    3319 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.578149  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:53 kubernetes-upgrade-210902 kubelet[3329]: E0108 21:11:53.448757    3329 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.578639  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:54 kubernetes-upgrade-210902 kubelet[3340]: E0108 21:11:54.205105    3340 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.579213  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:54 kubernetes-upgrade-210902 kubelet[3351]: E0108 21:11:54.959166    3351 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.579750  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:55 kubernetes-upgrade-210902 kubelet[3364]: E0108 21:11:55.696729    3364 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.580228  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:56 kubernetes-upgrade-210902 kubelet[3375]: E0108 21:11:56.454004    3375 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.580729  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:57 kubernetes-upgrade-210902 kubelet[3385]: E0108 21:11:57.209190    3385 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.581359  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:57 kubernetes-upgrade-210902 kubelet[3396]: E0108 21:11:57.952693    3396 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.581953  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:58 kubernetes-upgrade-210902 kubelet[3407]: E0108 21:11:58.696507    3407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.582551  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:59 kubernetes-upgrade-210902 kubelet[3418]: E0108 21:11:59.446615    3418 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.582986  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:00 kubernetes-upgrade-210902 kubelet[3430]: E0108 21:12:00.194543    3430 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.583502  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:00 kubernetes-upgrade-210902 kubelet[3443]: E0108 21:12:00.959446    3443 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:12:01.583703  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:12:01.583719  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:12:01.583869  181838 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0108 21:12:01.583885  181838 out.go:239]   Jan 08 21:11:57 kubernetes-upgrade-210902 kubelet[3396]: E0108 21:11:57.952693    3396 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:57 kubernetes-upgrade-210902 kubelet[3396]: E0108 21:11:57.952693    3396 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.583892  181838 out.go:239]   Jan 08 21:11:58 kubernetes-upgrade-210902 kubelet[3407]: E0108 21:11:58.696507    3407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:58 kubernetes-upgrade-210902 kubelet[3407]: E0108 21:11:58.696507    3407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.583900  181838 out.go:239]   Jan 08 21:11:59 kubernetes-upgrade-210902 kubelet[3418]: E0108 21:11:59.446615    3418 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:11:59 kubernetes-upgrade-210902 kubelet[3418]: E0108 21:11:59.446615    3418 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.583909  181838 out.go:239]   Jan 08 21:12:00 kubernetes-upgrade-210902 kubelet[3430]: E0108 21:12:00.194543    3430 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:12:00 kubernetes-upgrade-210902 kubelet[3430]: E0108 21:12:00.194543    3430 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:01.583916  181838 out.go:239]   Jan 08 21:12:00 kubernetes-upgrade-210902 kubelet[3443]: E0108 21:12:00.959446    3443 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:12:00 kubernetes-upgrade-210902 kubelet[3443]: E0108 21:12:00.959446    3443 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:12:01.583922  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:12:01.583927  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:12:11.584942  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:12:11.617298  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:12:11.617359  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:12:11.644750  181838 cri.go:87] found id: ""
	I0108 21:12:11.644775  181838 logs.go:274] 0 containers: []
	W0108 21:12:11.644782  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:12:11.644788  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:12:11.644837  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:12:11.669294  181838 cri.go:87] found id: ""
	I0108 21:12:11.669322  181838 logs.go:274] 0 containers: []
	W0108 21:12:11.669331  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:12:11.669337  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:12:11.669390  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:12:11.692503  181838 cri.go:87] found id: ""
	I0108 21:12:11.692531  181838 logs.go:274] 0 containers: []
	W0108 21:12:11.692540  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:12:11.692548  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:12:11.692600  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:12:11.716388  181838 cri.go:87] found id: ""
	I0108 21:12:11.716416  181838 logs.go:274] 0 containers: []
	W0108 21:12:11.716424  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:12:11.716439  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:12:11.716496  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:12:11.740692  181838 cri.go:87] found id: ""
	I0108 21:12:11.740713  181838 logs.go:274] 0 containers: []
	W0108 21:12:11.740721  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:12:11.740729  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:12:11.740782  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:12:11.766651  181838 cri.go:87] found id: ""
	I0108 21:12:11.766677  181838 logs.go:274] 0 containers: []
	W0108 21:12:11.766686  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:12:11.766694  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:12:11.766750  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:12:11.790312  181838 cri.go:87] found id: ""
	I0108 21:12:11.790335  181838 logs.go:274] 0 containers: []
	W0108 21:12:11.790349  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:12:11.790357  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:12:11.790412  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:12:11.813754  181838 cri.go:87] found id: ""
	I0108 21:12:11.813781  181838 logs.go:274] 0 containers: []
	W0108 21:12:11.813791  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:12:11.813802  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:12:11.813816  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:12:11.829309  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:21 kubernetes-upgrade-210902 kubelet[2485]: E0108 21:11:21.955774    2485 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.829953  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:22 kubernetes-upgrade-210902 kubelet[2496]: E0108 21:11:22.696146    2496 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.830600  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:23 kubernetes-upgrade-210902 kubelet[2507]: E0108 21:11:23.460169    2507 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.831246  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:24 kubernetes-upgrade-210902 kubelet[2517]: E0108 21:11:24.206181    2517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.831936  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:24 kubernetes-upgrade-210902 kubelet[2527]: E0108 21:11:24.965228    2527 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.832443  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:25 kubernetes-upgrade-210902 kubelet[2536]: E0108 21:11:25.702253    2536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.832855  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:26 kubernetes-upgrade-210902 kubelet[2547]: E0108 21:11:26.456785    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.833280  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:27 kubernetes-upgrade-210902 kubelet[2558]: E0108 21:11:27.194490    2558 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.833665  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:27 kubernetes-upgrade-210902 kubelet[2568]: E0108 21:11:27.984507    2568 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.834134  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:28 kubernetes-upgrade-210902 kubelet[2591]: E0108 21:11:28.722407    2591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.834756  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:29 kubernetes-upgrade-210902 kubelet[2718]: E0108 21:11:29.472298    2718 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.835369  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:30 kubernetes-upgrade-210902 kubelet[2728]: E0108 21:11:30.205639    2728 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.835890  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:30 kubernetes-upgrade-210902 kubelet[2739]: E0108 21:11:30.991580    2739 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.836248  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:31 kubernetes-upgrade-210902 kubelet[2748]: E0108 21:11:31.694738    2748 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.836618  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:32 kubernetes-upgrade-210902 kubelet[2758]: E0108 21:11:32.465365    2758 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.836984  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:33 kubernetes-upgrade-210902 kubelet[2769]: E0108 21:11:33.209351    2769 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.837335  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:33 kubernetes-upgrade-210902 kubelet[2780]: E0108 21:11:33.946681    2780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.837729  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:34 kubernetes-upgrade-210902 kubelet[2790]: E0108 21:11:34.712452    2790 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.838104  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:35 kubernetes-upgrade-210902 kubelet[2801]: E0108 21:11:35.473770    2801 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.838465  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:36 kubernetes-upgrade-210902 kubelet[2811]: E0108 21:11:36.212184    2811 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.838824  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:36 kubernetes-upgrade-210902 kubelet[2822]: E0108 21:11:36.948159    2822 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.839191  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:37 kubernetes-upgrade-210902 kubelet[2833]: E0108 21:11:37.698498    2833 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.839612  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:38 kubernetes-upgrade-210902 kubelet[2846]: E0108 21:11:38.453515    2846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.839973  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:39 kubernetes-upgrade-210902 kubelet[2857]: E0108 21:11:39.198105    2857 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.840325  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:39 kubernetes-upgrade-210902 kubelet[2989]: E0108 21:11:39.956820    2989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.840681  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:40 kubernetes-upgrade-210902 kubelet[3016]: E0108 21:11:40.700298    3016 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.841033  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:41 kubernetes-upgrade-210902 kubelet[3027]: E0108 21:11:41.449781    3027 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.841387  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:42 kubernetes-upgrade-210902 kubelet[3038]: E0108 21:11:42.198430    3038 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.841748  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:42 kubernetes-upgrade-210902 kubelet[3049]: E0108 21:11:42.949143    3049 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.842102  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:43 kubernetes-upgrade-210902 kubelet[3060]: E0108 21:11:43.703572    3060 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.842451  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:44 kubernetes-upgrade-210902 kubelet[3073]: E0108 21:11:44.472399    3073 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.842810  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:45 kubernetes-upgrade-210902 kubelet[3083]: E0108 21:11:45.206857    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.843162  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:45 kubernetes-upgrade-210902 kubelet[3094]: E0108 21:11:45.955227    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.843537  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:46 kubernetes-upgrade-210902 kubelet[3105]: E0108 21:11:46.702270    3105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.843909  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:47 kubernetes-upgrade-210902 kubelet[3116]: E0108 21:11:47.475257    3116 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.844259  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:48 kubernetes-upgrade-210902 kubelet[3127]: E0108 21:11:48.197769    3127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.844616  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:48 kubernetes-upgrade-210902 kubelet[3138]: E0108 21:11:48.976718    3138 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.844981  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:49 kubernetes-upgrade-210902 kubelet[3148]: E0108 21:11:49.709941    3148 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.845404  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:50 kubernetes-upgrade-210902 kubelet[3259]: E0108 21:11:50.469341    3259 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.845764  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:51 kubernetes-upgrade-210902 kubelet[3298]: E0108 21:11:51.210878    3298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.846142  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:51 kubernetes-upgrade-210902 kubelet[3309]: E0108 21:11:51.950966    3309 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.846560  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:52 kubernetes-upgrade-210902 kubelet[3319]: E0108 21:11:52.697299    3319 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.846917  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:53 kubernetes-upgrade-210902 kubelet[3329]: E0108 21:11:53.448757    3329 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.847267  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:54 kubernetes-upgrade-210902 kubelet[3340]: E0108 21:11:54.205105    3340 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.847640  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:54 kubernetes-upgrade-210902 kubelet[3351]: E0108 21:11:54.959166    3351 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.847993  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:55 kubernetes-upgrade-210902 kubelet[3364]: E0108 21:11:55.696729    3364 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.848343  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:56 kubernetes-upgrade-210902 kubelet[3375]: E0108 21:11:56.454004    3375 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.848696  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:57 kubernetes-upgrade-210902 kubelet[3385]: E0108 21:11:57.209190    3385 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.849065  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:57 kubernetes-upgrade-210902 kubelet[3396]: E0108 21:11:57.952693    3396 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.849419  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:58 kubernetes-upgrade-210902 kubelet[3407]: E0108 21:11:58.696507    3407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.849772  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:59 kubernetes-upgrade-210902 kubelet[3418]: E0108 21:11:59.446615    3418 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.850275  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:00 kubernetes-upgrade-210902 kubelet[3430]: E0108 21:12:00.194543    3430 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.850760  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:00 kubernetes-upgrade-210902 kubelet[3443]: E0108 21:12:00.959446    3443 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.851123  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:01 kubernetes-upgrade-210902 kubelet[3580]: E0108 21:12:01.706880    3580 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.851504  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:02 kubernetes-upgrade-210902 kubelet[3590]: E0108 21:12:02.455397    3590 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.851872  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:03 kubernetes-upgrade-210902 kubelet[3600]: E0108 21:12:03.207058    3600 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.852264  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:03 kubernetes-upgrade-210902 kubelet[3610]: E0108 21:12:03.947376    3610 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.852620  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:04 kubernetes-upgrade-210902 kubelet[3621]: E0108 21:12:04.752983    3621 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.852980  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:05 kubernetes-upgrade-210902 kubelet[3631]: E0108 21:12:05.445388    3631 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.853332  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:06 kubernetes-upgrade-210902 kubelet[3642]: E0108 21:12:06.201494    3642 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.853702  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:06 kubernetes-upgrade-210902 kubelet[3653]: E0108 21:12:06.956220    3653 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.854177  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:07 kubernetes-upgrade-210902 kubelet[3663]: E0108 21:12:07.698729    3663 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.854757  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:08 kubernetes-upgrade-210902 kubelet[3673]: E0108 21:12:08.446500    3673 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.855154  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:09 kubernetes-upgrade-210902 kubelet[3684]: E0108 21:12:09.204408    3684 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.855630  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:09 kubernetes-upgrade-210902 kubelet[3694]: E0108 21:12:09.952291    3694 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.856190  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:10 kubernetes-upgrade-210902 kubelet[3705]: E0108 21:12:10.705327    3705 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.856554  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:11 kubernetes-upgrade-210902 kubelet[3716]: E0108 21:12:11.446611    3716 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:12:11.856674  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:12:11.856690  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:12:11.871448  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:12:11.871506  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:12:11.930960  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:12:11.930986  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:12:11.931001  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:12:11.967095  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:12:11.967128  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:12:11.995802  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:12:11.995829  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:12:11.995969  181838 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0108 21:12:11.995989  181838 out.go:239]   Jan 08 21:12:08 kubernetes-upgrade-210902 kubelet[3673]: E0108 21:12:08.446500    3673 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:12:08 kubernetes-upgrade-210902 kubelet[3673]: E0108 21:12:08.446500    3673 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.995996  181838 out.go:239]   Jan 08 21:12:09 kubernetes-upgrade-210902 kubelet[3684]: E0108 21:12:09.204408    3684 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:12:09 kubernetes-upgrade-210902 kubelet[3684]: E0108 21:12:09.204408    3684 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.996002  181838 out.go:239]   Jan 08 21:12:09 kubernetes-upgrade-210902 kubelet[3694]: E0108 21:12:09.952291    3694 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:12:09 kubernetes-upgrade-210902 kubelet[3694]: E0108 21:12:09.952291    3694 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.996009  181838 out.go:239]   Jan 08 21:12:10 kubernetes-upgrade-210902 kubelet[3705]: E0108 21:12:10.705327    3705 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:12:10 kubernetes-upgrade-210902 kubelet[3705]: E0108 21:12:10.705327    3705 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:11.996016  181838 out.go:239]   Jan 08 21:12:11 kubernetes-upgrade-210902 kubelet[3716]: E0108 21:12:11.446611    3716 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:12:11 kubernetes-upgrade-210902 kubelet[3716]: E0108 21:12:11.446611    3716 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:12:11.996022  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:12:11.996030  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:12:21.997153  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:12:22.117479  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:12:22.117553  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:12:22.141894  181838 cri.go:87] found id: ""
	I0108 21:12:22.141920  181838 logs.go:274] 0 containers: []
	W0108 21:12:22.141926  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:12:22.141932  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:12:22.141991  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:12:22.165753  181838 cri.go:87] found id: ""
	I0108 21:12:22.165781  181838 logs.go:274] 0 containers: []
	W0108 21:12:22.165787  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:12:22.165796  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:12:22.165848  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:12:22.190105  181838 cri.go:87] found id: ""
	I0108 21:12:22.190130  181838 logs.go:274] 0 containers: []
	W0108 21:12:22.190136  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:12:22.190143  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:12:22.190192  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:12:22.215518  181838 cri.go:87] found id: ""
	I0108 21:12:22.215543  181838 logs.go:274] 0 containers: []
	W0108 21:12:22.215551  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:12:22.215560  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:12:22.215616  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:12:22.238852  181838 cri.go:87] found id: ""
	I0108 21:12:22.238879  181838 logs.go:274] 0 containers: []
	W0108 21:12:22.238888  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:12:22.238896  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:12:22.238953  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:12:22.262396  181838 cri.go:87] found id: ""
	I0108 21:12:22.262420  181838 logs.go:274] 0 containers: []
	W0108 21:12:22.262426  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:12:22.262432  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:12:22.262480  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:12:22.286540  181838 cri.go:87] found id: ""
	I0108 21:12:22.286563  181838 logs.go:274] 0 containers: []
	W0108 21:12:22.286570  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:12:22.286576  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:12:22.286619  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:12:22.313762  181838 cri.go:87] found id: ""
	I0108 21:12:22.313791  181838 logs.go:274] 0 containers: []
	W0108 21:12:22.313800  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:12:22.313812  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:12:22.313828  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:12:22.329544  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:32 kubernetes-upgrade-210902 kubelet[2758]: E0108 21:11:32.465365    2758 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.329993  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:33 kubernetes-upgrade-210902 kubelet[2769]: E0108 21:11:33.209351    2769 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.330367  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:33 kubernetes-upgrade-210902 kubelet[2780]: E0108 21:11:33.946681    2780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.330756  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:34 kubernetes-upgrade-210902 kubelet[2790]: E0108 21:11:34.712452    2790 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.331169  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:35 kubernetes-upgrade-210902 kubelet[2801]: E0108 21:11:35.473770    2801 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.331587  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:36 kubernetes-upgrade-210902 kubelet[2811]: E0108 21:11:36.212184    2811 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.331969  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:36 kubernetes-upgrade-210902 kubelet[2822]: E0108 21:11:36.948159    2822 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.332340  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:37 kubernetes-upgrade-210902 kubelet[2833]: E0108 21:11:37.698498    2833 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.332721  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:38 kubernetes-upgrade-210902 kubelet[2846]: E0108 21:11:38.453515    2846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.333091  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:39 kubernetes-upgrade-210902 kubelet[2857]: E0108 21:11:39.198105    2857 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.333478  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:39 kubernetes-upgrade-210902 kubelet[2989]: E0108 21:11:39.956820    2989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.333860  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:40 kubernetes-upgrade-210902 kubelet[3016]: E0108 21:11:40.700298    3016 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.334222  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:41 kubernetes-upgrade-210902 kubelet[3027]: E0108 21:11:41.449781    3027 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.334598  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:42 kubernetes-upgrade-210902 kubelet[3038]: E0108 21:11:42.198430    3038 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.334971  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:42 kubernetes-upgrade-210902 kubelet[3049]: E0108 21:11:42.949143    3049 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.335338  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:43 kubernetes-upgrade-210902 kubelet[3060]: E0108 21:11:43.703572    3060 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.335760  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:44 kubernetes-upgrade-210902 kubelet[3073]: E0108 21:11:44.472399    3073 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.336135  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:45 kubernetes-upgrade-210902 kubelet[3083]: E0108 21:11:45.206857    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.336515  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:45 kubernetes-upgrade-210902 kubelet[3094]: E0108 21:11:45.955227    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.336889  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:46 kubernetes-upgrade-210902 kubelet[3105]: E0108 21:11:46.702270    3105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.337258  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:47 kubernetes-upgrade-210902 kubelet[3116]: E0108 21:11:47.475257    3116 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.337631  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:48 kubernetes-upgrade-210902 kubelet[3127]: E0108 21:11:48.197769    3127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.338027  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:48 kubernetes-upgrade-210902 kubelet[3138]: E0108 21:11:48.976718    3138 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.338403  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:49 kubernetes-upgrade-210902 kubelet[3148]: E0108 21:11:49.709941    3148 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.338780  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:50 kubernetes-upgrade-210902 kubelet[3259]: E0108 21:11:50.469341    3259 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.339146  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:51 kubernetes-upgrade-210902 kubelet[3298]: E0108 21:11:51.210878    3298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.339545  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:51 kubernetes-upgrade-210902 kubelet[3309]: E0108 21:11:51.950966    3309 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.339965  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:52 kubernetes-upgrade-210902 kubelet[3319]: E0108 21:11:52.697299    3319 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.340334  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:53 kubernetes-upgrade-210902 kubelet[3329]: E0108 21:11:53.448757    3329 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.340724  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:54 kubernetes-upgrade-210902 kubelet[3340]: E0108 21:11:54.205105    3340 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.341092  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:54 kubernetes-upgrade-210902 kubelet[3351]: E0108 21:11:54.959166    3351 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.341472  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:55 kubernetes-upgrade-210902 kubelet[3364]: E0108 21:11:55.696729    3364 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.341856  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:56 kubernetes-upgrade-210902 kubelet[3375]: E0108 21:11:56.454004    3375 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.342231  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:57 kubernetes-upgrade-210902 kubelet[3385]: E0108 21:11:57.209190    3385 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.342595  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:57 kubernetes-upgrade-210902 kubelet[3396]: E0108 21:11:57.952693    3396 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.342983  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:58 kubernetes-upgrade-210902 kubelet[3407]: E0108 21:11:58.696507    3407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.343347  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:59 kubernetes-upgrade-210902 kubelet[3418]: E0108 21:11:59.446615    3418 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.343812  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:00 kubernetes-upgrade-210902 kubelet[3430]: E0108 21:12:00.194543    3430 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.344200  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:00 kubernetes-upgrade-210902 kubelet[3443]: E0108 21:12:00.959446    3443 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.344569  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:01 kubernetes-upgrade-210902 kubelet[3580]: E0108 21:12:01.706880    3580 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.344966  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:02 kubernetes-upgrade-210902 kubelet[3590]: E0108 21:12:02.455397    3590 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.345387  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:03 kubernetes-upgrade-210902 kubelet[3600]: E0108 21:12:03.207058    3600 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.345763  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:03 kubernetes-upgrade-210902 kubelet[3610]: E0108 21:12:03.947376    3610 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.346149  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:04 kubernetes-upgrade-210902 kubelet[3621]: E0108 21:12:04.752983    3621 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.346518  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:05 kubernetes-upgrade-210902 kubelet[3631]: E0108 21:12:05.445388    3631 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.346907  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:06 kubernetes-upgrade-210902 kubelet[3642]: E0108 21:12:06.201494    3642 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.347309  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:06 kubernetes-upgrade-210902 kubelet[3653]: E0108 21:12:06.956220    3653 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.347729  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:07 kubernetes-upgrade-210902 kubelet[3663]: E0108 21:12:07.698729    3663 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.348147  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:08 kubernetes-upgrade-210902 kubelet[3673]: E0108 21:12:08.446500    3673 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.348512  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:09 kubernetes-upgrade-210902 kubelet[3684]: E0108 21:12:09.204408    3684 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.348909  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:09 kubernetes-upgrade-210902 kubelet[3694]: E0108 21:12:09.952291    3694 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.349273  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:10 kubernetes-upgrade-210902 kubelet[3705]: E0108 21:12:10.705327    3705 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.349644  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:11 kubernetes-upgrade-210902 kubelet[3716]: E0108 21:12:11.446611    3716 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.350015  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:12 kubernetes-upgrade-210902 kubelet[3866]: E0108 21:12:12.200568    3866 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.350381  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:12 kubernetes-upgrade-210902 kubelet[3876]: E0108 21:12:12.954536    3876 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.350765  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:13 kubernetes-upgrade-210902 kubelet[3886]: E0108 21:12:13.697587    3886 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.351138  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:14 kubernetes-upgrade-210902 kubelet[3896]: E0108 21:12:14.460710    3896 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.351541  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:15 kubernetes-upgrade-210902 kubelet[3906]: E0108 21:12:15.206611    3906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.351925  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:15 kubernetes-upgrade-210902 kubelet[3916]: E0108 21:12:15.964363    3916 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.352296  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:16 kubernetes-upgrade-210902 kubelet[3925]: E0108 21:12:16.716055    3925 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.352694  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:17 kubernetes-upgrade-210902 kubelet[3937]: E0108 21:12:17.496333    3937 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.353061  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:18 kubernetes-upgrade-210902 kubelet[3947]: E0108 21:12:18.231269    3947 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.353443  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:18 kubernetes-upgrade-210902 kubelet[3957]: E0108 21:12:18.959236    3957 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.353821  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:19 kubernetes-upgrade-210902 kubelet[3967]: E0108 21:12:19.705286    3967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.354188  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:20 kubernetes-upgrade-210902 kubelet[3978]: E0108 21:12:20.454549    3978 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.354566  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:21 kubernetes-upgrade-210902 kubelet[3989]: E0108 21:12:21.197865    3989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.354961  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:21 kubernetes-upgrade-210902 kubelet[4000]: E0108 21:12:21.947445    4000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:12:22.355093  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:12:22.355111  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:12:22.370010  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:12:22.370036  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:12:22.426360  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:12:22.426380  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:12:22.426389  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:12:22.461065  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:12:22.461095  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:12:22.488717  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:12:22.488748  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:12:22.488866  181838 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0108 21:12:22.488881  181838 out.go:239]   Jan 08 21:12:18 kubernetes-upgrade-210902 kubelet[3957]: E0108 21:12:18.959236    3957 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:12:18 kubernetes-upgrade-210902 kubelet[3957]: E0108 21:12:18.959236    3957 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.488889  181838 out.go:239]   Jan 08 21:12:19 kubernetes-upgrade-210902 kubelet[3967]: E0108 21:12:19.705286    3967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:12:19 kubernetes-upgrade-210902 kubelet[3967]: E0108 21:12:19.705286    3967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.488900  181838 out.go:239]   Jan 08 21:12:20 kubernetes-upgrade-210902 kubelet[3978]: E0108 21:12:20.454549    3978 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:12:20 kubernetes-upgrade-210902 kubelet[3978]: E0108 21:12:20.454549    3978 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.488912  181838 out.go:239]   Jan 08 21:12:21 kubernetes-upgrade-210902 kubelet[3989]: E0108 21:12:21.197865    3989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:12:21 kubernetes-upgrade-210902 kubelet[3989]: E0108 21:12:21.197865    3989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:22.488920  181838 out.go:239]   Jan 08 21:12:21 kubernetes-upgrade-210902 kubelet[4000]: E0108 21:12:21.947445    4000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:12:21 kubernetes-upgrade-210902 kubelet[4000]: E0108 21:12:21.947445    4000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:12:22.488928  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:12:22.488940  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:12:32.489839  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:12:32.617579  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:12:32.617640  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:12:32.647913  181838 cri.go:87] found id: ""
	I0108 21:12:32.647939  181838 logs.go:274] 0 containers: []
	W0108 21:12:32.647947  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:12:32.647955  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:12:32.648005  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:12:32.676330  181838 cri.go:87] found id: ""
	I0108 21:12:32.676351  181838 logs.go:274] 0 containers: []
	W0108 21:12:32.676357  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:12:32.676363  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:12:32.676409  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:12:32.699975  181838 cri.go:87] found id: ""
	I0108 21:12:32.699996  181838 logs.go:274] 0 containers: []
	W0108 21:12:32.700003  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:12:32.700009  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:12:32.700050  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:12:32.727355  181838 cri.go:87] found id: ""
	I0108 21:12:32.727380  181838 logs.go:274] 0 containers: []
	W0108 21:12:32.727389  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:12:32.727396  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:12:32.727449  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:12:32.752900  181838 cri.go:87] found id: ""
	I0108 21:12:32.752930  181838 logs.go:274] 0 containers: []
	W0108 21:12:32.752939  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:12:32.752946  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:12:32.752994  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:12:32.779845  181838 cri.go:87] found id: ""
	I0108 21:12:32.779865  181838 logs.go:274] 0 containers: []
	W0108 21:12:32.779872  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:12:32.779878  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:12:32.779936  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:12:32.802135  181838 cri.go:87] found id: ""
	I0108 21:12:32.802156  181838 logs.go:274] 0 containers: []
	W0108 21:12:32.802162  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:12:32.802168  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:12:32.802208  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:12:32.832559  181838 cri.go:87] found id: ""
	I0108 21:12:32.832584  181838 logs.go:274] 0 containers: []
	W0108 21:12:32.832593  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:12:32.832603  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:12:32.832616  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:12:32.849543  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:42 kubernetes-upgrade-210902 kubelet[3049]: E0108 21:11:42.949143    3049 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.850207  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:43 kubernetes-upgrade-210902 kubelet[3060]: E0108 21:11:43.703572    3060 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.850845  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:44 kubernetes-upgrade-210902 kubelet[3073]: E0108 21:11:44.472399    3073 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.851508  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:45 kubernetes-upgrade-210902 kubelet[3083]: E0108 21:11:45.206857    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.852145  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:45 kubernetes-upgrade-210902 kubelet[3094]: E0108 21:11:45.955227    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.852779  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:46 kubernetes-upgrade-210902 kubelet[3105]: E0108 21:11:46.702270    3105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.853419  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:47 kubernetes-upgrade-210902 kubelet[3116]: E0108 21:11:47.475257    3116 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.854058  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:48 kubernetes-upgrade-210902 kubelet[3127]: E0108 21:11:48.197769    3127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.854697  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:48 kubernetes-upgrade-210902 kubelet[3138]: E0108 21:11:48.976718    3138 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.855334  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:49 kubernetes-upgrade-210902 kubelet[3148]: E0108 21:11:49.709941    3148 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.855996  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:50 kubernetes-upgrade-210902 kubelet[3259]: E0108 21:11:50.469341    3259 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.856655  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:51 kubernetes-upgrade-210902 kubelet[3298]: E0108 21:11:51.210878    3298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.857295  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:51 kubernetes-upgrade-210902 kubelet[3309]: E0108 21:11:51.950966    3309 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.857933  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:52 kubernetes-upgrade-210902 kubelet[3319]: E0108 21:11:52.697299    3319 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.858573  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:53 kubernetes-upgrade-210902 kubelet[3329]: E0108 21:11:53.448757    3329 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.859225  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:54 kubernetes-upgrade-210902 kubelet[3340]: E0108 21:11:54.205105    3340 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.859934  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:54 kubernetes-upgrade-210902 kubelet[3351]: E0108 21:11:54.959166    3351 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.860581  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:55 kubernetes-upgrade-210902 kubelet[3364]: E0108 21:11:55.696729    3364 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.861222  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:56 kubernetes-upgrade-210902 kubelet[3375]: E0108 21:11:56.454004    3375 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.861851  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:57 kubernetes-upgrade-210902 kubelet[3385]: E0108 21:11:57.209190    3385 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.862491  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:57 kubernetes-upgrade-210902 kubelet[3396]: E0108 21:11:57.952693    3396 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.863126  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:58 kubernetes-upgrade-210902 kubelet[3407]: E0108 21:11:58.696507    3407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.863774  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:59 kubernetes-upgrade-210902 kubelet[3418]: E0108 21:11:59.446615    3418 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.864429  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:00 kubernetes-upgrade-210902 kubelet[3430]: E0108 21:12:00.194543    3430 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.865064  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:00 kubernetes-upgrade-210902 kubelet[3443]: E0108 21:12:00.959446    3443 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.865706  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:01 kubernetes-upgrade-210902 kubelet[3580]: E0108 21:12:01.706880    3580 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.866350  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:02 kubernetes-upgrade-210902 kubelet[3590]: E0108 21:12:02.455397    3590 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.866985  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:03 kubernetes-upgrade-210902 kubelet[3600]: E0108 21:12:03.207058    3600 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.867638  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:03 kubernetes-upgrade-210902 kubelet[3610]: E0108 21:12:03.947376    3610 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.868277  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:04 kubernetes-upgrade-210902 kubelet[3621]: E0108 21:12:04.752983    3621 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.868911  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:05 kubernetes-upgrade-210902 kubelet[3631]: E0108 21:12:05.445388    3631 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.869548  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:06 kubernetes-upgrade-210902 kubelet[3642]: E0108 21:12:06.201494    3642 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.870190  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:06 kubernetes-upgrade-210902 kubelet[3653]: E0108 21:12:06.956220    3653 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.870829  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:07 kubernetes-upgrade-210902 kubelet[3663]: E0108 21:12:07.698729    3663 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.871477  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:08 kubernetes-upgrade-210902 kubelet[3673]: E0108 21:12:08.446500    3673 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.872111  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:09 kubernetes-upgrade-210902 kubelet[3684]: E0108 21:12:09.204408    3684 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.872753  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:09 kubernetes-upgrade-210902 kubelet[3694]: E0108 21:12:09.952291    3694 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.873390  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:10 kubernetes-upgrade-210902 kubelet[3705]: E0108 21:12:10.705327    3705 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.874023  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:11 kubernetes-upgrade-210902 kubelet[3716]: E0108 21:12:11.446611    3716 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.874665  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:12 kubernetes-upgrade-210902 kubelet[3866]: E0108 21:12:12.200568    3866 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.875311  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:12 kubernetes-upgrade-210902 kubelet[3876]: E0108 21:12:12.954536    3876 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.875762  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:13 kubernetes-upgrade-210902 kubelet[3886]: E0108 21:12:13.697587    3886 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.876115  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:14 kubernetes-upgrade-210902 kubelet[3896]: E0108 21:12:14.460710    3896 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.876473  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:15 kubernetes-upgrade-210902 kubelet[3906]: E0108 21:12:15.206611    3906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.876827  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:15 kubernetes-upgrade-210902 kubelet[3916]: E0108 21:12:15.964363    3916 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.877177  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:16 kubernetes-upgrade-210902 kubelet[3925]: E0108 21:12:16.716055    3925 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.877526  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:17 kubernetes-upgrade-210902 kubelet[3937]: E0108 21:12:17.496333    3937 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.877874  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:18 kubernetes-upgrade-210902 kubelet[3947]: E0108 21:12:18.231269    3947 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.878226  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:18 kubernetes-upgrade-210902 kubelet[3957]: E0108 21:12:18.959236    3957 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.878575  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:19 kubernetes-upgrade-210902 kubelet[3967]: E0108 21:12:19.705286    3967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.878929  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:20 kubernetes-upgrade-210902 kubelet[3978]: E0108 21:12:20.454549    3978 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.879295  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:21 kubernetes-upgrade-210902 kubelet[3989]: E0108 21:12:21.197865    3989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.879675  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:21 kubernetes-upgrade-210902 kubelet[4000]: E0108 21:12:21.947445    4000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.880023  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:22 kubernetes-upgrade-210902 kubelet[4148]: E0108 21:12:22.696890    4148 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.880378  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:23 kubernetes-upgrade-210902 kubelet[4159]: E0108 21:12:23.444989    4159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.880728  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:24 kubernetes-upgrade-210902 kubelet[4171]: E0108 21:12:24.202370    4171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.881089  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:24 kubernetes-upgrade-210902 kubelet[4181]: E0108 21:12:24.963635    4181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.881446  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:25 kubernetes-upgrade-210902 kubelet[4191]: E0108 21:12:25.697356    4191 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.881804  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:26 kubernetes-upgrade-210902 kubelet[4202]: E0108 21:12:26.451233    4202 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.882159  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:27 kubernetes-upgrade-210902 kubelet[4213]: E0108 21:12:27.200913    4213 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.882509  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:27 kubernetes-upgrade-210902 kubelet[4224]: E0108 21:12:27.948464    4224 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.882861  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:28 kubernetes-upgrade-210902 kubelet[4234]: E0108 21:12:28.699309    4234 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.883215  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:29 kubernetes-upgrade-210902 kubelet[4245]: E0108 21:12:29.446654    4245 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.883580  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:30 kubernetes-upgrade-210902 kubelet[4256]: E0108 21:12:30.195229    4256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.883934  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:30 kubernetes-upgrade-210902 kubelet[4267]: E0108 21:12:30.949224    4267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.884297  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:31 kubernetes-upgrade-210902 kubelet[4278]: E0108 21:12:31.720874    4278 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:32.884654  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:32 kubernetes-upgrade-210902 kubelet[4288]: E0108 21:12:32.455768    4288 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:12:32.884773  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:12:32.884787  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:12:32.899523  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:12:32.899550  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:12:32.968189  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:12:32.968210  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:12:32.968222  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:12:33.003436  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:12:33.003464  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:12:33.032369  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:12:33.032395  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:12:33.032520  181838 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0108 21:12:33.032532  181838 out.go:239]   Jan 08 21:12:29 kubernetes-upgrade-210902 kubelet[4245]: E0108 21:12:29.446654    4245 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:12:29 kubernetes-upgrade-210902 kubelet[4245]: E0108 21:12:29.446654    4245 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:33.032539  181838 out.go:239]   Jan 08 21:12:30 kubernetes-upgrade-210902 kubelet[4256]: E0108 21:12:30.195229    4256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:12:30 kubernetes-upgrade-210902 kubelet[4256]: E0108 21:12:30.195229    4256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:33.032546  181838 out.go:239]   Jan 08 21:12:30 kubernetes-upgrade-210902 kubelet[4267]: E0108 21:12:30.949224    4267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:12:30 kubernetes-upgrade-210902 kubelet[4267]: E0108 21:12:30.949224    4267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:33.032561  181838 out.go:239]   Jan 08 21:12:31 kubernetes-upgrade-210902 kubelet[4278]: E0108 21:12:31.720874    4278 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:12:31 kubernetes-upgrade-210902 kubelet[4278]: E0108 21:12:31.720874    4278 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:33.032568  181838 out.go:239]   Jan 08 21:12:32 kubernetes-upgrade-210902 kubelet[4288]: E0108 21:12:32.455768    4288 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:12:32 kubernetes-upgrade-210902 kubelet[4288]: E0108 21:12:32.455768    4288 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:12:33.032574  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:12:33.032582  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:12:43.033605  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:12:43.116768  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:12:43.116843  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:12:43.142390  181838 cri.go:87] found id: ""
	I0108 21:12:43.142417  181838 logs.go:274] 0 containers: []
	W0108 21:12:43.142427  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:12:43.142436  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:12:43.142492  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:12:43.169381  181838 cri.go:87] found id: ""
	I0108 21:12:43.169407  181838 logs.go:274] 0 containers: []
	W0108 21:12:43.169413  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:12:43.169419  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:12:43.169465  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:12:43.194801  181838 cri.go:87] found id: ""
	I0108 21:12:43.194827  181838 logs.go:274] 0 containers: []
	W0108 21:12:43.194835  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:12:43.194843  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:12:43.194888  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:12:43.219673  181838 cri.go:87] found id: ""
	I0108 21:12:43.219695  181838 logs.go:274] 0 containers: []
	W0108 21:12:43.219703  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:12:43.219711  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:12:43.219762  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:12:43.243211  181838 cri.go:87] found id: ""
	I0108 21:12:43.243232  181838 logs.go:274] 0 containers: []
	W0108 21:12:43.243237  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:12:43.243243  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:12:43.243283  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:12:43.267221  181838 cri.go:87] found id: ""
	I0108 21:12:43.267258  181838 logs.go:274] 0 containers: []
	W0108 21:12:43.267268  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:12:43.267276  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:12:43.267332  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:12:43.291153  181838 cri.go:87] found id: ""
	I0108 21:12:43.291183  181838 logs.go:274] 0 containers: []
	W0108 21:12:43.291193  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:12:43.291200  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:12:43.291242  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:12:43.313962  181838 cri.go:87] found id: ""
	I0108 21:12:43.313989  181838 logs.go:274] 0 containers: []
	W0108 21:12:43.313998  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:12:43.314010  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:12:43.314028  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:12:43.329789  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:53 kubernetes-upgrade-210902 kubelet[3329]: E0108 21:11:53.448757    3329 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.330175  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:54 kubernetes-upgrade-210902 kubelet[3340]: E0108 21:11:54.205105    3340 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.330560  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:54 kubernetes-upgrade-210902 kubelet[3351]: E0108 21:11:54.959166    3351 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.330946  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:55 kubernetes-upgrade-210902 kubelet[3364]: E0108 21:11:55.696729    3364 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.331345  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:56 kubernetes-upgrade-210902 kubelet[3375]: E0108 21:11:56.454004    3375 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.331768  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:57 kubernetes-upgrade-210902 kubelet[3385]: E0108 21:11:57.209190    3385 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.332170  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:57 kubernetes-upgrade-210902 kubelet[3396]: E0108 21:11:57.952693    3396 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.332580  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:58 kubernetes-upgrade-210902 kubelet[3407]: E0108 21:11:58.696507    3407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.332954  181838 logs.go:138] Found kubelet problem: Jan 08 21:11:59 kubernetes-upgrade-210902 kubelet[3418]: E0108 21:11:59.446615    3418 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.333335  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:00 kubernetes-upgrade-210902 kubelet[3430]: E0108 21:12:00.194543    3430 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.333710  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:00 kubernetes-upgrade-210902 kubelet[3443]: E0108 21:12:00.959446    3443 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.334091  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:01 kubernetes-upgrade-210902 kubelet[3580]: E0108 21:12:01.706880    3580 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.334469  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:02 kubernetes-upgrade-210902 kubelet[3590]: E0108 21:12:02.455397    3590 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.334851  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:03 kubernetes-upgrade-210902 kubelet[3600]: E0108 21:12:03.207058    3600 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.335229  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:03 kubernetes-upgrade-210902 kubelet[3610]: E0108 21:12:03.947376    3610 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.335649  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:04 kubernetes-upgrade-210902 kubelet[3621]: E0108 21:12:04.752983    3621 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.336060  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:05 kubernetes-upgrade-210902 kubelet[3631]: E0108 21:12:05.445388    3631 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.336442  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:06 kubernetes-upgrade-210902 kubelet[3642]: E0108 21:12:06.201494    3642 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.336817  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:06 kubernetes-upgrade-210902 kubelet[3653]: E0108 21:12:06.956220    3653 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.337198  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:07 kubernetes-upgrade-210902 kubelet[3663]: E0108 21:12:07.698729    3663 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.337572  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:08 kubernetes-upgrade-210902 kubelet[3673]: E0108 21:12:08.446500    3673 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.337944  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:09 kubernetes-upgrade-210902 kubelet[3684]: E0108 21:12:09.204408    3684 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.338320  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:09 kubernetes-upgrade-210902 kubelet[3694]: E0108 21:12:09.952291    3694 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.338703  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:10 kubernetes-upgrade-210902 kubelet[3705]: E0108 21:12:10.705327    3705 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.339099  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:11 kubernetes-upgrade-210902 kubelet[3716]: E0108 21:12:11.446611    3716 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.339565  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:12 kubernetes-upgrade-210902 kubelet[3866]: E0108 21:12:12.200568    3866 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.339992  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:12 kubernetes-upgrade-210902 kubelet[3876]: E0108 21:12:12.954536    3876 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.340375  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:13 kubernetes-upgrade-210902 kubelet[3886]: E0108 21:12:13.697587    3886 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.340752  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:14 kubernetes-upgrade-210902 kubelet[3896]: E0108 21:12:14.460710    3896 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.341126  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:15 kubernetes-upgrade-210902 kubelet[3906]: E0108 21:12:15.206611    3906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.341596  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:15 kubernetes-upgrade-210902 kubelet[3916]: E0108 21:12:15.964363    3916 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.342029  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:16 kubernetes-upgrade-210902 kubelet[3925]: E0108 21:12:16.716055    3925 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.342449  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:17 kubernetes-upgrade-210902 kubelet[3937]: E0108 21:12:17.496333    3937 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.342833  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:18 kubernetes-upgrade-210902 kubelet[3947]: E0108 21:12:18.231269    3947 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.343240  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:18 kubernetes-upgrade-210902 kubelet[3957]: E0108 21:12:18.959236    3957 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.343636  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:19 kubernetes-upgrade-210902 kubelet[3967]: E0108 21:12:19.705286    3967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.344165  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:20 kubernetes-upgrade-210902 kubelet[3978]: E0108 21:12:20.454549    3978 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.344768  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:21 kubernetes-upgrade-210902 kubelet[3989]: E0108 21:12:21.197865    3989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.345360  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:21 kubernetes-upgrade-210902 kubelet[4000]: E0108 21:12:21.947445    4000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.345955  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:22 kubernetes-upgrade-210902 kubelet[4148]: E0108 21:12:22.696890    4148 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.346385  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:23 kubernetes-upgrade-210902 kubelet[4159]: E0108 21:12:23.444989    4159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.346750  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:24 kubernetes-upgrade-210902 kubelet[4171]: E0108 21:12:24.202370    4171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.347135  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:24 kubernetes-upgrade-210902 kubelet[4181]: E0108 21:12:24.963635    4181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.347530  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:25 kubernetes-upgrade-210902 kubelet[4191]: E0108 21:12:25.697356    4191 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.347903  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:26 kubernetes-upgrade-210902 kubelet[4202]: E0108 21:12:26.451233    4202 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.348273  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:27 kubernetes-upgrade-210902 kubelet[4213]: E0108 21:12:27.200913    4213 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.348641  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:27 kubernetes-upgrade-210902 kubelet[4224]: E0108 21:12:27.948464    4224 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.349009  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:28 kubernetes-upgrade-210902 kubelet[4234]: E0108 21:12:28.699309    4234 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.349372  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:29 kubernetes-upgrade-210902 kubelet[4245]: E0108 21:12:29.446654    4245 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.349734  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:30 kubernetes-upgrade-210902 kubelet[4256]: E0108 21:12:30.195229    4256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.350099  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:30 kubernetes-upgrade-210902 kubelet[4267]: E0108 21:12:30.949224    4267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.350461  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:31 kubernetes-upgrade-210902 kubelet[4278]: E0108 21:12:31.720874    4278 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.350826  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:32 kubernetes-upgrade-210902 kubelet[4288]: E0108 21:12:32.455768    4288 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.351192  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:33 kubernetes-upgrade-210902 kubelet[4430]: E0108 21:12:33.197494    4430 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.351600  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:33 kubernetes-upgrade-210902 kubelet[4442]: E0108 21:12:33.952611    4442 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.351977  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:34 kubernetes-upgrade-210902 kubelet[4453]: E0108 21:12:34.705940    4453 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.352360  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:35 kubernetes-upgrade-210902 kubelet[4465]: E0108 21:12:35.450199    4465 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.352723  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:36 kubernetes-upgrade-210902 kubelet[4476]: E0108 21:12:36.227223    4476 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.353097  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:36 kubernetes-upgrade-210902 kubelet[4487]: E0108 21:12:36.968970    4487 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.353462  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:37 kubernetes-upgrade-210902 kubelet[4498]: E0108 21:12:37.697629    4498 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.353826  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:38 kubernetes-upgrade-210902 kubelet[4510]: E0108 21:12:38.466220    4510 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.354197  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:39 kubernetes-upgrade-210902 kubelet[4519]: E0108 21:12:39.201764    4519 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.354562  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:39 kubernetes-upgrade-210902 kubelet[4530]: E0108 21:12:39.946359    4530 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.354922  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:40 kubernetes-upgrade-210902 kubelet[4541]: E0108 21:12:40.707903    4541 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.355289  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:41 kubernetes-upgrade-210902 kubelet[4553]: E0108 21:12:41.445616    4553 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.355716  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:42 kubernetes-upgrade-210902 kubelet[4564]: E0108 21:12:42.202869    4564 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.356128  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:42 kubernetes-upgrade-210902 kubelet[4574]: E0108 21:12:42.946836    4574 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:12:43.356265  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:12:43.356284  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:12:43.371331  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:12:43.371358  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:12:43.429224  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:12:43.429251  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:12:43.429270  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:12:43.465351  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:12:43.465381  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:12:43.490561  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:12:43.490583  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:12:43.490682  181838 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0108 21:12:43.490693  181838 out.go:239]   Jan 08 21:12:39 kubernetes-upgrade-210902 kubelet[4530]: E0108 21:12:39.946359    4530 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.490698  181838 out.go:239]   Jan 08 21:12:40 kubernetes-upgrade-210902 kubelet[4541]: E0108 21:12:40.707903    4541 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.490705  181838 out.go:239]   Jan 08 21:12:41 kubernetes-upgrade-210902 kubelet[4553]: E0108 21:12:41.445616    4553 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.490709  181838 out.go:239]   Jan 08 21:12:42 kubernetes-upgrade-210902 kubelet[4564]: E0108 21:12:42.202869    4564 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:43.490720  181838 out.go:239]   Jan 08 21:12:42 kubernetes-upgrade-210902 kubelet[4574]: E0108 21:12:42.946836    4574 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:12:43.490726  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:12:43.490732  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:12:53.492594  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:12:53.617261  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:12:53.617339  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:12:53.645674  181838 cri.go:87] found id: ""
	I0108 21:12:53.645696  181838 logs.go:274] 0 containers: []
	W0108 21:12:53.645702  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:12:53.645708  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:12:53.645748  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:12:53.672008  181838 cri.go:87] found id: ""
	I0108 21:12:53.672035  181838 logs.go:274] 0 containers: []
	W0108 21:12:53.672045  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:12:53.672053  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:12:53.672108  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:12:53.703987  181838 cri.go:87] found id: ""
	I0108 21:12:53.704011  181838 logs.go:274] 0 containers: []
	W0108 21:12:53.704021  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:12:53.704029  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:12:53.704082  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:12:53.729592  181838 cri.go:87] found id: ""
	I0108 21:12:53.729615  181838 logs.go:274] 0 containers: []
	W0108 21:12:53.729623  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:12:53.729631  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:12:53.729686  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:12:53.754902  181838 cri.go:87] found id: ""
	I0108 21:12:53.754924  181838 logs.go:274] 0 containers: []
	W0108 21:12:53.754934  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:12:53.754942  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:12:53.754997  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:12:53.780055  181838 cri.go:87] found id: ""
	I0108 21:12:53.780083  181838 logs.go:274] 0 containers: []
	W0108 21:12:53.780092  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:12:53.780100  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:12:53.780148  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:12:53.804932  181838 cri.go:87] found id: ""
	I0108 21:12:53.804959  181838 logs.go:274] 0 containers: []
	W0108 21:12:53.804970  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:12:53.804979  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:12:53.805033  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:12:53.828533  181838 cri.go:87] found id: ""
	I0108 21:12:53.828558  181838 logs.go:274] 0 containers: []
	W0108 21:12:53.828565  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:12:53.828574  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:12:53.828584  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:12:53.867243  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:12:53.867273  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:12:53.895274  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:12:53.895303  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:12:53.912859  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:03 kubernetes-upgrade-210902 kubelet[3610]: E0108 21:12:03.947376    3610 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.913457  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:04 kubernetes-upgrade-210902 kubelet[3621]: E0108 21:12:04.752983    3621 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.914052  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:05 kubernetes-upgrade-210902 kubelet[3631]: E0108 21:12:05.445388    3631 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.914643  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:06 kubernetes-upgrade-210902 kubelet[3642]: E0108 21:12:06.201494    3642 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.915220  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:06 kubernetes-upgrade-210902 kubelet[3653]: E0108 21:12:06.956220    3653 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.915935  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:07 kubernetes-upgrade-210902 kubelet[3663]: E0108 21:12:07.698729    3663 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.916521  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:08 kubernetes-upgrade-210902 kubelet[3673]: E0108 21:12:08.446500    3673 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.917050  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:09 kubernetes-upgrade-210902 kubelet[3684]: E0108 21:12:09.204408    3684 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.917457  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:09 kubernetes-upgrade-210902 kubelet[3694]: E0108 21:12:09.952291    3694 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.917838  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:10 kubernetes-upgrade-210902 kubelet[3705]: E0108 21:12:10.705327    3705 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.918192  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:11 kubernetes-upgrade-210902 kubelet[3716]: E0108 21:12:11.446611    3716 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.918560  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:12 kubernetes-upgrade-210902 kubelet[3866]: E0108 21:12:12.200568    3866 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.918913  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:12 kubernetes-upgrade-210902 kubelet[3876]: E0108 21:12:12.954536    3876 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.919282  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:13 kubernetes-upgrade-210902 kubelet[3886]: E0108 21:12:13.697587    3886 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.919676  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:14 kubernetes-upgrade-210902 kubelet[3896]: E0108 21:12:14.460710    3896 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.920032  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:15 kubernetes-upgrade-210902 kubelet[3906]: E0108 21:12:15.206611    3906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.920468  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:15 kubernetes-upgrade-210902 kubelet[3916]: E0108 21:12:15.964363    3916 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.920824  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:16 kubernetes-upgrade-210902 kubelet[3925]: E0108 21:12:16.716055    3925 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.921179  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:17 kubernetes-upgrade-210902 kubelet[3937]: E0108 21:12:17.496333    3937 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.921542  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:18 kubernetes-upgrade-210902 kubelet[3947]: E0108 21:12:18.231269    3947 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.921893  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:18 kubernetes-upgrade-210902 kubelet[3957]: E0108 21:12:18.959236    3957 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.922325  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:19 kubernetes-upgrade-210902 kubelet[3967]: E0108 21:12:19.705286    3967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.922700  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:20 kubernetes-upgrade-210902 kubelet[3978]: E0108 21:12:20.454549    3978 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.923062  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:21 kubernetes-upgrade-210902 kubelet[3989]: E0108 21:12:21.197865    3989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.923420  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:21 kubernetes-upgrade-210902 kubelet[4000]: E0108 21:12:21.947445    4000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.923817  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:22 kubernetes-upgrade-210902 kubelet[4148]: E0108 21:12:22.696890    4148 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.924199  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:23 kubernetes-upgrade-210902 kubelet[4159]: E0108 21:12:23.444989    4159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.924552  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:24 kubernetes-upgrade-210902 kubelet[4171]: E0108 21:12:24.202370    4171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.924904  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:24 kubernetes-upgrade-210902 kubelet[4181]: E0108 21:12:24.963635    4181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.925285  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:25 kubernetes-upgrade-210902 kubelet[4191]: E0108 21:12:25.697356    4191 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.925640  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:26 kubernetes-upgrade-210902 kubelet[4202]: E0108 21:12:26.451233    4202 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.925994  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:27 kubernetes-upgrade-210902 kubelet[4213]: E0108 21:12:27.200913    4213 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.926350  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:27 kubernetes-upgrade-210902 kubelet[4224]: E0108 21:12:27.948464    4224 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.926745  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:28 kubernetes-upgrade-210902 kubelet[4234]: E0108 21:12:28.699309    4234 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.927140  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:29 kubernetes-upgrade-210902 kubelet[4245]: E0108 21:12:29.446654    4245 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.927565  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:30 kubernetes-upgrade-210902 kubelet[4256]: E0108 21:12:30.195229    4256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.927930  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:30 kubernetes-upgrade-210902 kubelet[4267]: E0108 21:12:30.949224    4267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.928352  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:31 kubernetes-upgrade-210902 kubelet[4278]: E0108 21:12:31.720874    4278 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.928816  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:32 kubernetes-upgrade-210902 kubelet[4288]: E0108 21:12:32.455768    4288 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.929197  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:33 kubernetes-upgrade-210902 kubelet[4430]: E0108 21:12:33.197494    4430 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.929576  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:33 kubernetes-upgrade-210902 kubelet[4442]: E0108 21:12:33.952611    4442 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.929957  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:34 kubernetes-upgrade-210902 kubelet[4453]: E0108 21:12:34.705940    4453 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.930344  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:35 kubernetes-upgrade-210902 kubelet[4465]: E0108 21:12:35.450199    4465 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.930709  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:36 kubernetes-upgrade-210902 kubelet[4476]: E0108 21:12:36.227223    4476 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.931084  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:36 kubernetes-upgrade-210902 kubelet[4487]: E0108 21:12:36.968970    4487 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.931563  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:37 kubernetes-upgrade-210902 kubelet[4498]: E0108 21:12:37.697629    4498 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.932007  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:38 kubernetes-upgrade-210902 kubelet[4510]: E0108 21:12:38.466220    4510 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.932394  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:39 kubernetes-upgrade-210902 kubelet[4519]: E0108 21:12:39.201764    4519 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.932819  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:39 kubernetes-upgrade-210902 kubelet[4530]: E0108 21:12:39.946359    4530 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.933229  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:40 kubernetes-upgrade-210902 kubelet[4541]: E0108 21:12:40.707903    4541 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.933583  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:41 kubernetes-upgrade-210902 kubelet[4553]: E0108 21:12:41.445616    4553 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.933951  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:42 kubernetes-upgrade-210902 kubelet[4564]: E0108 21:12:42.202869    4564 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.934339  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:42 kubernetes-upgrade-210902 kubelet[4574]: E0108 21:12:42.946836    4574 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.934929  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:43 kubernetes-upgrade-210902 kubelet[4723]: E0108 21:12:43.701425    4723 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.935576  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:44 kubernetes-upgrade-210902 kubelet[4733]: E0108 21:12:44.451142    4733 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.935983  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:45 kubernetes-upgrade-210902 kubelet[4743]: E0108 21:12:45.201516    4743 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.936362  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:45 kubernetes-upgrade-210902 kubelet[4753]: E0108 21:12:45.951223    4753 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.936976  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:46 kubernetes-upgrade-210902 kubelet[4764]: E0108 21:12:46.702469    4764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.937609  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:47 kubernetes-upgrade-210902 kubelet[4775]: E0108 21:12:47.477715    4775 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.938091  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:48 kubernetes-upgrade-210902 kubelet[4785]: E0108 21:12:48.211798    4785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.938450  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:48 kubernetes-upgrade-210902 kubelet[4794]: E0108 21:12:48.948316    4794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.938806  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:49 kubernetes-upgrade-210902 kubelet[4804]: E0108 21:12:49.699460    4804 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.939160  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:50 kubernetes-upgrade-210902 kubelet[4814]: E0108 21:12:50.446250    4814 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.939566  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4825]: E0108 21:12:51.274290    4825 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.939930  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4835]: E0108 21:12:51.981369    4835 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.940287  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:52 kubernetes-upgrade-210902 kubelet[4846]: E0108 21:12:52.696913    4846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:53.940648  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:53 kubernetes-upgrade-210902 kubelet[4857]: E0108 21:12:53.463386    4857 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:12:53.940768  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:12:53.940785  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:12:53.956687  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:12:53.956715  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:12:54.021113  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:12:54.021143  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:12:54.021154  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:12:54.021283  181838 out.go:239] X Problems detected in kubelet:
	W0108 21:12:54.021297  181838 out.go:239]   Jan 08 21:12:50 kubernetes-upgrade-210902 kubelet[4814]: E0108 21:12:50.446250    4814 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:54.021306  181838 out.go:239]   Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4825]: E0108 21:12:51.274290    4825 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:54.021317  181838 out.go:239]   Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4835]: E0108 21:12:51.981369    4835 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:54.021324  181838 out.go:239]   Jan 08 21:12:52 kubernetes-upgrade-210902 kubelet[4846]: E0108 21:12:52.696913    4846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:12:54.021336  181838 out.go:239]   Jan 08 21:12:53 kubernetes-upgrade-210902 kubelet[4857]: E0108 21:12:53.463386    4857 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:12:54.021348  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:12:54.021358  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:13:04.023135  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:13:04.117330  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:13:04.117409  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:13:04.141597  181838 cri.go:87] found id: ""
	I0108 21:13:04.141638  181838 logs.go:274] 0 containers: []
	W0108 21:13:04.141650  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:13:04.141660  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:13:04.141711  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:13:04.167715  181838 cri.go:87] found id: ""
	I0108 21:13:04.167739  181838 logs.go:274] 0 containers: []
	W0108 21:13:04.167746  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:13:04.167754  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:13:04.167808  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:13:04.191793  181838 cri.go:87] found id: ""
	I0108 21:13:04.191823  181838 logs.go:274] 0 containers: []
	W0108 21:13:04.191830  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:13:04.191835  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:13:04.191882  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:13:04.217471  181838 cri.go:87] found id: ""
	I0108 21:13:04.217493  181838 logs.go:274] 0 containers: []
	W0108 21:13:04.217499  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:13:04.217507  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:13:04.217557  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:13:04.241482  181838 cri.go:87] found id: ""
	I0108 21:13:04.241504  181838 logs.go:274] 0 containers: []
	W0108 21:13:04.241510  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:13:04.241517  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:13:04.241559  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:13:04.266045  181838 cri.go:87] found id: ""
	I0108 21:13:04.266070  181838 logs.go:274] 0 containers: []
	W0108 21:13:04.266076  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:13:04.266085  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:13:04.266125  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:13:04.289476  181838 cri.go:87] found id: ""
	I0108 21:13:04.289499  181838 logs.go:274] 0 containers: []
	W0108 21:13:04.289508  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:13:04.289516  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:13:04.289573  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:13:04.313137  181838 cri.go:87] found id: ""
	I0108 21:13:04.313160  181838 logs.go:274] 0 containers: []
	W0108 21:13:04.313168  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:13:04.313181  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:13:04.313197  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:13:04.329184  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:14 kubernetes-upgrade-210902 kubelet[3896]: E0108 21:12:14.460710    3896 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.329812  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:15 kubernetes-upgrade-210902 kubelet[3906]: E0108 21:12:15.206611    3906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.330406  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:15 kubernetes-upgrade-210902 kubelet[3916]: E0108 21:12:15.964363    3916 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.330988  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:16 kubernetes-upgrade-210902 kubelet[3925]: E0108 21:12:16.716055    3925 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.331582  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:17 kubernetes-upgrade-210902 kubelet[3937]: E0108 21:12:17.496333    3937 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.332166  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:18 kubernetes-upgrade-210902 kubelet[3947]: E0108 21:12:18.231269    3947 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.332543  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:18 kubernetes-upgrade-210902 kubelet[3957]: E0108 21:12:18.959236    3957 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.332900  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:19 kubernetes-upgrade-210902 kubelet[3967]: E0108 21:12:19.705286    3967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.333267  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:20 kubernetes-upgrade-210902 kubelet[3978]: E0108 21:12:20.454549    3978 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.333620  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:21 kubernetes-upgrade-210902 kubelet[3989]: E0108 21:12:21.197865    3989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.333983  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:21 kubernetes-upgrade-210902 kubelet[4000]: E0108 21:12:21.947445    4000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.334414  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:22 kubernetes-upgrade-210902 kubelet[4148]: E0108 21:12:22.696890    4148 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.334915  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:23 kubernetes-upgrade-210902 kubelet[4159]: E0108 21:12:23.444989    4159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.335271  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:24 kubernetes-upgrade-210902 kubelet[4171]: E0108 21:12:24.202370    4171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.335663  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:24 kubernetes-upgrade-210902 kubelet[4181]: E0108 21:12:24.963635    4181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.336013  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:25 kubernetes-upgrade-210902 kubelet[4191]: E0108 21:12:25.697356    4191 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.336373  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:26 kubernetes-upgrade-210902 kubelet[4202]: E0108 21:12:26.451233    4202 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.336730  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:27 kubernetes-upgrade-210902 kubelet[4213]: E0108 21:12:27.200913    4213 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.337081  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:27 kubernetes-upgrade-210902 kubelet[4224]: E0108 21:12:27.948464    4224 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.337431  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:28 kubernetes-upgrade-210902 kubelet[4234]: E0108 21:12:28.699309    4234 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.337897  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:29 kubernetes-upgrade-210902 kubelet[4245]: E0108 21:12:29.446654    4245 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.338425  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:30 kubernetes-upgrade-210902 kubelet[4256]: E0108 21:12:30.195229    4256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.338810  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:30 kubernetes-upgrade-210902 kubelet[4267]: E0108 21:12:30.949224    4267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.339163  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:31 kubernetes-upgrade-210902 kubelet[4278]: E0108 21:12:31.720874    4278 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.339660  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:32 kubernetes-upgrade-210902 kubelet[4288]: E0108 21:12:32.455768    4288 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.340020  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:33 kubernetes-upgrade-210902 kubelet[4430]: E0108 21:12:33.197494    4430 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.340381  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:33 kubernetes-upgrade-210902 kubelet[4442]: E0108 21:12:33.952611    4442 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.340742  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:34 kubernetes-upgrade-210902 kubelet[4453]: E0108 21:12:34.705940    4453 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.341116  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:35 kubernetes-upgrade-210902 kubelet[4465]: E0108 21:12:35.450199    4465 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.341591  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:36 kubernetes-upgrade-210902 kubelet[4476]: E0108 21:12:36.227223    4476 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.341955  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:36 kubernetes-upgrade-210902 kubelet[4487]: E0108 21:12:36.968970    4487 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.342337  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:37 kubernetes-upgrade-210902 kubelet[4498]: E0108 21:12:37.697629    4498 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.342723  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:38 kubernetes-upgrade-210902 kubelet[4510]: E0108 21:12:38.466220    4510 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.343080  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:39 kubernetes-upgrade-210902 kubelet[4519]: E0108 21:12:39.201764    4519 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.343444  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:39 kubernetes-upgrade-210902 kubelet[4530]: E0108 21:12:39.946359    4530 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.343830  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:40 kubernetes-upgrade-210902 kubelet[4541]: E0108 21:12:40.707903    4541 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.344183  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:41 kubernetes-upgrade-210902 kubelet[4553]: E0108 21:12:41.445616    4553 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.344537  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:42 kubernetes-upgrade-210902 kubelet[4564]: E0108 21:12:42.202869    4564 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.344896  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:42 kubernetes-upgrade-210902 kubelet[4574]: E0108 21:12:42.946836    4574 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.345248  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:43 kubernetes-upgrade-210902 kubelet[4723]: E0108 21:12:43.701425    4723 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.345604  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:44 kubernetes-upgrade-210902 kubelet[4733]: E0108 21:12:44.451142    4733 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.345956  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:45 kubernetes-upgrade-210902 kubelet[4743]: E0108 21:12:45.201516    4743 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.346400  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:45 kubernetes-upgrade-210902 kubelet[4753]: E0108 21:12:45.951223    4753 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.346884  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:46 kubernetes-upgrade-210902 kubelet[4764]: E0108 21:12:46.702469    4764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.347237  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:47 kubernetes-upgrade-210902 kubelet[4775]: E0108 21:12:47.477715    4775 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.347618  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:48 kubernetes-upgrade-210902 kubelet[4785]: E0108 21:12:48.211798    4785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.347973  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:48 kubernetes-upgrade-210902 kubelet[4794]: E0108 21:12:48.948316    4794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.348325  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:49 kubernetes-upgrade-210902 kubelet[4804]: E0108 21:12:49.699460    4804 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.348699  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:50 kubernetes-upgrade-210902 kubelet[4814]: E0108 21:12:50.446250    4814 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.349050  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4825]: E0108 21:12:51.274290    4825 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.349403  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4835]: E0108 21:12:51.981369    4835 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.349798  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:52 kubernetes-upgrade-210902 kubelet[4846]: E0108 21:12:52.696913    4846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.350323  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:53 kubernetes-upgrade-210902 kubelet[4857]: E0108 21:12:53.463386    4857 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.350811  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:54 kubernetes-upgrade-210902 kubelet[5003]: E0108 21:12:54.195862    5003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.351163  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:54 kubernetes-upgrade-210902 kubelet[5014]: E0108 21:12:54.973538    5014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.351609  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:55 kubernetes-upgrade-210902 kubelet[5025]: E0108 21:12:55.700041    5025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.351966  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:56 kubernetes-upgrade-210902 kubelet[5036]: E0108 21:12:56.460751    5036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.352319  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5046]: E0108 21:12:57.202829    5046 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.352692  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5056]: E0108 21:12:57.949309    5056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.353051  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:58 kubernetes-upgrade-210902 kubelet[5067]: E0108 21:12:58.695134    5067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.353402  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:59 kubernetes-upgrade-210902 kubelet[5078]: E0108 21:12:59.453700    5078 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.353759  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5090]: E0108 21:13:00.195809    5090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.354118  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5101]: E0108 21:13:00.949365    5101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.354467  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:01 kubernetes-upgrade-210902 kubelet[5112]: E0108 21:13:01.699957    5112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.354819  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:02 kubernetes-upgrade-210902 kubelet[5122]: E0108 21:13:02.445014    5122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.355173  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5133]: E0108 21:13:03.211715    5133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.355556  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5144]: E0108 21:13:03.945571    5144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:04.355688  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:13:04.355704  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:13:04.370635  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:13:04.370665  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:13:04.426551  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:13:04.426574  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:13:04.426586  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:13:04.474170  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:13:04.474218  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:13:04.504353  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:04.504380  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:13:04.504491  181838 out.go:239] X Problems detected in kubelet:
	W0108 21:13:04.504505  181838 out.go:239]   Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5101]: E0108 21:13:00.949365    5101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.504511  181838 out.go:239]   Jan 08 21:13:01 kubernetes-upgrade-210902 kubelet[5112]: E0108 21:13:01.699957    5112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.504517  181838 out.go:239]   Jan 08 21:13:02 kubernetes-upgrade-210902 kubelet[5122]: E0108 21:13:02.445014    5122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.504524  181838 out.go:239]   Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5133]: E0108 21:13:03.211715    5133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.504529  181838 out.go:239]   Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5144]: E0108 21:13:03.945571    5144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:04.504534  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:04.504539  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:13:14.506067  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:13:14.616992  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:13:14.617065  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:13:14.640908  181838 cri.go:87] found id: ""
	I0108 21:13:14.640931  181838 logs.go:274] 0 containers: []
	W0108 21:13:14.640937  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:13:14.640943  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:13:14.640984  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:13:14.669486  181838 cri.go:87] found id: ""
	I0108 21:13:14.669512  181838 logs.go:274] 0 containers: []
	W0108 21:13:14.669519  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:13:14.669525  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:13:14.669578  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:13:14.693768  181838 cri.go:87] found id: ""
	I0108 21:13:14.693798  181838 logs.go:274] 0 containers: []
	W0108 21:13:14.693806  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:13:14.693812  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:13:14.693854  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:13:14.719500  181838 cri.go:87] found id: ""
	I0108 21:13:14.719528  181838 logs.go:274] 0 containers: []
	W0108 21:13:14.719537  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:13:14.719545  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:13:14.719603  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:13:14.744451  181838 cri.go:87] found id: ""
	I0108 21:13:14.744489  181838 logs.go:274] 0 containers: []
	W0108 21:13:14.744497  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:13:14.744510  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:13:14.744556  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:13:14.770184  181838 cri.go:87] found id: ""
	I0108 21:13:14.770210  181838 logs.go:274] 0 containers: []
	W0108 21:13:14.770217  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:13:14.770223  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:13:14.770265  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:13:14.794087  181838 cri.go:87] found id: ""
	I0108 21:13:14.794113  181838 logs.go:274] 0 containers: []
	W0108 21:13:14.794119  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:13:14.794125  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:13:14.794175  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:13:14.821619  181838 cri.go:87] found id: ""
	I0108 21:13:14.821645  181838 logs.go:274] 0 containers: []
	W0108 21:13:14.821653  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:13:14.821664  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:13:14.821678  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:13:14.837613  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:24 kubernetes-upgrade-210902 kubelet[4181]: E0108 21:12:24.963635    4181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.838156  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:25 kubernetes-upgrade-210902 kubelet[4191]: E0108 21:12:25.697356    4191 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.838733  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:26 kubernetes-upgrade-210902 kubelet[4202]: E0108 21:12:26.451233    4202 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.839251  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:27 kubernetes-upgrade-210902 kubelet[4213]: E0108 21:12:27.200913    4213 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.839882  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:27 kubernetes-upgrade-210902 kubelet[4224]: E0108 21:12:27.948464    4224 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.840310  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:28 kubernetes-upgrade-210902 kubelet[4234]: E0108 21:12:28.699309    4234 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.840846  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:29 kubernetes-upgrade-210902 kubelet[4245]: E0108 21:12:29.446654    4245 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.841297  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:30 kubernetes-upgrade-210902 kubelet[4256]: E0108 21:12:30.195229    4256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.841831  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:30 kubernetes-upgrade-210902 kubelet[4267]: E0108 21:12:30.949224    4267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.842455  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:31 kubernetes-upgrade-210902 kubelet[4278]: E0108 21:12:31.720874    4278 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.843013  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:32 kubernetes-upgrade-210902 kubelet[4288]: E0108 21:12:32.455768    4288 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.843558  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:33 kubernetes-upgrade-210902 kubelet[4430]: E0108 21:12:33.197494    4430 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.844135  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:33 kubernetes-upgrade-210902 kubelet[4442]: E0108 21:12:33.952611    4442 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.844603  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:34 kubernetes-upgrade-210902 kubelet[4453]: E0108 21:12:34.705940    4453 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.844966  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:35 kubernetes-upgrade-210902 kubelet[4465]: E0108 21:12:35.450199    4465 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.845318  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:36 kubernetes-upgrade-210902 kubelet[4476]: E0108 21:12:36.227223    4476 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.845697  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:36 kubernetes-upgrade-210902 kubelet[4487]: E0108 21:12:36.968970    4487 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.846066  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:37 kubernetes-upgrade-210902 kubelet[4498]: E0108 21:12:37.697629    4498 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.846441  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:38 kubernetes-upgrade-210902 kubelet[4510]: E0108 21:12:38.466220    4510 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.846806  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:39 kubernetes-upgrade-210902 kubelet[4519]: E0108 21:12:39.201764    4519 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.847178  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:39 kubernetes-upgrade-210902 kubelet[4530]: E0108 21:12:39.946359    4530 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.847632  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:40 kubernetes-upgrade-210902 kubelet[4541]: E0108 21:12:40.707903    4541 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.847985  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:41 kubernetes-upgrade-210902 kubelet[4553]: E0108 21:12:41.445616    4553 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.848338  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:42 kubernetes-upgrade-210902 kubelet[4564]: E0108 21:12:42.202869    4564 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.848697  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:42 kubernetes-upgrade-210902 kubelet[4574]: E0108 21:12:42.946836    4574 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.849047  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:43 kubernetes-upgrade-210902 kubelet[4723]: E0108 21:12:43.701425    4723 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.849407  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:44 kubernetes-upgrade-210902 kubelet[4733]: E0108 21:12:44.451142    4733 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.849764  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:45 kubernetes-upgrade-210902 kubelet[4743]: E0108 21:12:45.201516    4743 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.850123  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:45 kubernetes-upgrade-210902 kubelet[4753]: E0108 21:12:45.951223    4753 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.850482  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:46 kubernetes-upgrade-210902 kubelet[4764]: E0108 21:12:46.702469    4764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.850846  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:47 kubernetes-upgrade-210902 kubelet[4775]: E0108 21:12:47.477715    4775 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.851215  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:48 kubernetes-upgrade-210902 kubelet[4785]: E0108 21:12:48.211798    4785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.851638  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:48 kubernetes-upgrade-210902 kubelet[4794]: E0108 21:12:48.948316    4794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.852007  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:49 kubernetes-upgrade-210902 kubelet[4804]: E0108 21:12:49.699460    4804 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.852353  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:50 kubernetes-upgrade-210902 kubelet[4814]: E0108 21:12:50.446250    4814 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.852733  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4825]: E0108 21:12:51.274290    4825 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.853085  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4835]: E0108 21:12:51.981369    4835 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.853434  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:52 kubernetes-upgrade-210902 kubelet[4846]: E0108 21:12:52.696913    4846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.853788  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:53 kubernetes-upgrade-210902 kubelet[4857]: E0108 21:12:53.463386    4857 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.854157  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:54 kubernetes-upgrade-210902 kubelet[5003]: E0108 21:12:54.195862    5003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.854515  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:54 kubernetes-upgrade-210902 kubelet[5014]: E0108 21:12:54.973538    5014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.854866  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:55 kubernetes-upgrade-210902 kubelet[5025]: E0108 21:12:55.700041    5025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.855289  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:56 kubernetes-upgrade-210902 kubelet[5036]: E0108 21:12:56.460751    5036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.855716  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5046]: E0108 21:12:57.202829    5046 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.856099  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5056]: E0108 21:12:57.949309    5056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.856471  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:58 kubernetes-upgrade-210902 kubelet[5067]: E0108 21:12:58.695134    5067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.856849  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:59 kubernetes-upgrade-210902 kubelet[5078]: E0108 21:12:59.453700    5078 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.857224  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5090]: E0108 21:13:00.195809    5090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.857610  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5101]: E0108 21:13:00.949365    5101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.857985  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:01 kubernetes-upgrade-210902 kubelet[5112]: E0108 21:13:01.699957    5112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.858362  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:02 kubernetes-upgrade-210902 kubelet[5122]: E0108 21:13:02.445014    5122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.858755  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5133]: E0108 21:13:03.211715    5133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.859316  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5144]: E0108 21:13:03.945571    5144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.859923  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:04 kubernetes-upgrade-210902 kubelet[5291]: E0108 21:13:04.696837    5291 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.860343  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:05 kubernetes-upgrade-210902 kubelet[5302]: E0108 21:13:05.455832    5302 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.860702  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:06 kubernetes-upgrade-210902 kubelet[5311]: E0108 21:13:06.202481    5311 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.861055  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:06 kubernetes-upgrade-210902 kubelet[5321]: E0108 21:13:06.953379    5321 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.861404  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:07 kubernetes-upgrade-210902 kubelet[5332]: E0108 21:13:07.695770    5332 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.861769  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:08 kubernetes-upgrade-210902 kubelet[5342]: E0108 21:13:08.448723    5342 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.862125  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5352]: E0108 21:13:09.203634    5352 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.862472  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5363]: E0108 21:13:09.945857    5363 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.862823  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:10 kubernetes-upgrade-210902 kubelet[5373]: E0108 21:13:10.695131    5373 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.863174  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:11 kubernetes-upgrade-210902 kubelet[5384]: E0108 21:13:11.445518    5384 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.863595  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5395]: E0108 21:13:12.198327    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.863957  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5405]: E0108 21:13:12.948375    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.864304  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:13 kubernetes-upgrade-210902 kubelet[5415]: E0108 21:13:13.704963    5415 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.864723  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:14 kubernetes-upgrade-210902 kubelet[5426]: E0108 21:13:14.445641    5426 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:14.864841  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:13:14.864855  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:13:14.882066  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:13:14.882092  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:13:14.940199  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:13:14.940222  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:13:14.940235  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:13:14.981672  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:13:14.981703  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:13:15.008917  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:15.008942  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:13:15.009059  181838 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0108 21:13:15.009076  181838 out.go:239]   Jan 08 21:13:11 kubernetes-upgrade-210902 kubelet[5384]: E0108 21:13:11.445518    5384 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:11 kubernetes-upgrade-210902 kubelet[5384]: E0108 21:13:11.445518    5384 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:15.009084  181838 out.go:239]   Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5395]: E0108 21:13:12.198327    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5395]: E0108 21:13:12.198327    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:15.009091  181838 out.go:239]   Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5405]: E0108 21:13:12.948375    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5405]: E0108 21:13:12.948375    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:15.009103  181838 out.go:239]   Jan 08 21:13:13 kubernetes-upgrade-210902 kubelet[5415]: E0108 21:13:13.704963    5415 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:13 kubernetes-upgrade-210902 kubelet[5415]: E0108 21:13:13.704963    5415 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:15.009113  181838 out.go:239]   Jan 08 21:13:14 kubernetes-upgrade-210902 kubelet[5426]: E0108 21:13:14.445641    5426 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:14 kubernetes-upgrade-210902 kubelet[5426]: E0108 21:13:14.445641    5426 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:15.009120  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:15.009131  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
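	
	The kubelet failures collected in the cycle above all repeat one error: the kubelet refuses to start because it no longer recognizes the --cni-conf-dir flag (dropped together with the other dockershim-era networking flags in recent Kubernetes releases), so the service crash-loops and the API server on localhost:8443 never comes up, which is why every crictl listing below finds no containers. A minimal way to confirm this directly on the node is sketched here; the profile name is taken from the log, and using systemctl cat to reveal the flags actually passed to the kubelet is an assumption about how the unit is configured, not something shown in this report.
	
	# Show the most recent kubelet start-up errors (same journalctl query minikube runs above)
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-210902 -- sudo journalctl -u kubelet -n 50 --no-pager | grep "unknown flag"
	
	# Check whether the stale flag is still part of the kubelet's generated command line (assumed location of the flags)
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-210902 -- sudo systemctl cat kubelet | grep -e "--cni-conf-dir"
	
	If the second command matches, removing the flag from the kubelet configuration (or letting minikube regenerate it for the target Kubernetes version) and restarting the unit would be the expected fix; the log that follows simply repeats the same polling cycle while the flag is still present.
	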
	I0108 21:13:25.010266  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:13:25.117176  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:13:25.117234  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:13:25.140892  181838 cri.go:87] found id: ""
	I0108 21:13:25.140921  181838 logs.go:274] 0 containers: []
	W0108 21:13:25.140930  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:13:25.140938  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:13:25.140987  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:13:25.164294  181838 cri.go:87] found id: ""
	I0108 21:13:25.164323  181838 logs.go:274] 0 containers: []
	W0108 21:13:25.164332  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:13:25.164339  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:13:25.164386  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:13:25.187064  181838 cri.go:87] found id: ""
	I0108 21:13:25.187086  181838 logs.go:274] 0 containers: []
	W0108 21:13:25.187092  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:13:25.187100  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:13:25.187138  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:13:25.210554  181838 cri.go:87] found id: ""
	I0108 21:13:25.210581  181838 logs.go:274] 0 containers: []
	W0108 21:13:25.210591  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:13:25.210599  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:13:25.210649  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:13:25.234681  181838 cri.go:87] found id: ""
	I0108 21:13:25.234709  181838 logs.go:274] 0 containers: []
	W0108 21:13:25.234718  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:13:25.234727  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:13:25.234778  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:13:25.260188  181838 cri.go:87] found id: ""
	I0108 21:13:25.260214  181838 logs.go:274] 0 containers: []
	W0108 21:13:25.260221  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:13:25.260230  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:13:25.260281  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:13:25.283543  181838 cri.go:87] found id: ""
	I0108 21:13:25.283573  181838 logs.go:274] 0 containers: []
	W0108 21:13:25.283581  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:13:25.283589  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:13:25.283634  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:13:25.307407  181838 cri.go:87] found id: ""
	I0108 21:13:25.307431  181838 logs.go:274] 0 containers: []
	W0108 21:13:25.307438  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:13:25.307447  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:13:25.307458  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:13:25.322801  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:13:25.322826  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:13:25.377720  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:13:25.377742  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:13:25.377754  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:13:25.416865  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:13:25.416896  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:13:25.444675  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:13:25.444701  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:13:25.464847  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:35 kubernetes-upgrade-210902 kubelet[4465]: E0108 21:12:35.450199    4465 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.465214  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:36 kubernetes-upgrade-210902 kubelet[4476]: E0108 21:12:36.227223    4476 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.465565  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:36 kubernetes-upgrade-210902 kubelet[4487]: E0108 21:12:36.968970    4487 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.465942  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:37 kubernetes-upgrade-210902 kubelet[4498]: E0108 21:12:37.697629    4498 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.466320  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:38 kubernetes-upgrade-210902 kubelet[4510]: E0108 21:12:38.466220    4510 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.466684  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:39 kubernetes-upgrade-210902 kubelet[4519]: E0108 21:12:39.201764    4519 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.467074  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:39 kubernetes-upgrade-210902 kubelet[4530]: E0108 21:12:39.946359    4530 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.467457  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:40 kubernetes-upgrade-210902 kubelet[4541]: E0108 21:12:40.707903    4541 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.467887  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:41 kubernetes-upgrade-210902 kubelet[4553]: E0108 21:12:41.445616    4553 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.468268  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:42 kubernetes-upgrade-210902 kubelet[4564]: E0108 21:12:42.202869    4564 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.468707  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:42 kubernetes-upgrade-210902 kubelet[4574]: E0108 21:12:42.946836    4574 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.469087  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:43 kubernetes-upgrade-210902 kubelet[4723]: E0108 21:12:43.701425    4723 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.469467  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:44 kubernetes-upgrade-210902 kubelet[4733]: E0108 21:12:44.451142    4733 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.469844  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:45 kubernetes-upgrade-210902 kubelet[4743]: E0108 21:12:45.201516    4743 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.470223  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:45 kubernetes-upgrade-210902 kubelet[4753]: E0108 21:12:45.951223    4753 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.470608  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:46 kubernetes-upgrade-210902 kubelet[4764]: E0108 21:12:46.702469    4764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.471014  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:47 kubernetes-upgrade-210902 kubelet[4775]: E0108 21:12:47.477715    4775 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.471430  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:48 kubernetes-upgrade-210902 kubelet[4785]: E0108 21:12:48.211798    4785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.471830  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:48 kubernetes-upgrade-210902 kubelet[4794]: E0108 21:12:48.948316    4794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.472237  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:49 kubernetes-upgrade-210902 kubelet[4804]: E0108 21:12:49.699460    4804 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.472657  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:50 kubernetes-upgrade-210902 kubelet[4814]: E0108 21:12:50.446250    4814 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.473041  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4825]: E0108 21:12:51.274290    4825 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.473423  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4835]: E0108 21:12:51.981369    4835 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.473786  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:52 kubernetes-upgrade-210902 kubelet[4846]: E0108 21:12:52.696913    4846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.474140  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:53 kubernetes-upgrade-210902 kubelet[4857]: E0108 21:12:53.463386    4857 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.474492  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:54 kubernetes-upgrade-210902 kubelet[5003]: E0108 21:12:54.195862    5003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.474846  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:54 kubernetes-upgrade-210902 kubelet[5014]: E0108 21:12:54.973538    5014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.475199  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:55 kubernetes-upgrade-210902 kubelet[5025]: E0108 21:12:55.700041    5025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.475621  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:56 kubernetes-upgrade-210902 kubelet[5036]: E0108 21:12:56.460751    5036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.475983  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5046]: E0108 21:12:57.202829    5046 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.476357  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5056]: E0108 21:12:57.949309    5056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.476757  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:58 kubernetes-upgrade-210902 kubelet[5067]: E0108 21:12:58.695134    5067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.477110  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:59 kubernetes-upgrade-210902 kubelet[5078]: E0108 21:12:59.453700    5078 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.477569  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5090]: E0108 21:13:00.195809    5090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.478062  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5101]: E0108 21:13:00.949365    5101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.478444  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:01 kubernetes-upgrade-210902 kubelet[5112]: E0108 21:13:01.699957    5112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.478830  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:02 kubernetes-upgrade-210902 kubelet[5122]: E0108 21:13:02.445014    5122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.479218  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5133]: E0108 21:13:03.211715    5133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.479633  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5144]: E0108 21:13:03.945571    5144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.480056  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:04 kubernetes-upgrade-210902 kubelet[5291]: E0108 21:13:04.696837    5291 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.480439  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:05 kubernetes-upgrade-210902 kubelet[5302]: E0108 21:13:05.455832    5302 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.480815  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:06 kubernetes-upgrade-210902 kubelet[5311]: E0108 21:13:06.202481    5311 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.481193  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:06 kubernetes-upgrade-210902 kubelet[5321]: E0108 21:13:06.953379    5321 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.481569  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:07 kubernetes-upgrade-210902 kubelet[5332]: E0108 21:13:07.695770    5332 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.481953  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:08 kubernetes-upgrade-210902 kubelet[5342]: E0108 21:13:08.448723    5342 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.482336  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5352]: E0108 21:13:09.203634    5352 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.482691  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5363]: E0108 21:13:09.945857    5363 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.483043  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:10 kubernetes-upgrade-210902 kubelet[5373]: E0108 21:13:10.695131    5373 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.483393  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:11 kubernetes-upgrade-210902 kubelet[5384]: E0108 21:13:11.445518    5384 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.483814  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5395]: E0108 21:13:12.198327    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.484169  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5405]: E0108 21:13:12.948375    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.484534  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:13 kubernetes-upgrade-210902 kubelet[5415]: E0108 21:13:13.704963    5415 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.484891  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:14 kubernetes-upgrade-210902 kubelet[5426]: E0108 21:13:14.445641    5426 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.485262  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:15 kubernetes-upgrade-210902 kubelet[5575]: E0108 21:13:15.199671    5575 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.485663  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:15 kubernetes-upgrade-210902 kubelet[5585]: E0108 21:13:15.945043    5585 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.486076  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:16 kubernetes-upgrade-210902 kubelet[5596]: E0108 21:13:16.695191    5596 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.486445  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:17 kubernetes-upgrade-210902 kubelet[5607]: E0108 21:13:17.473576    5607 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.486808  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5617]: E0108 21:13:18.200053    5617 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.487162  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5628]: E0108 21:13:18.947856    5628 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.487546  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:19 kubernetes-upgrade-210902 kubelet[5639]: E0108 21:13:19.696468    5639 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.487903  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:20 kubernetes-upgrade-210902 kubelet[5651]: E0108 21:13:20.445123    5651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.488253  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5662]: E0108 21:13:21.195995    5662 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.488601  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5674]: E0108 21:13:21.946669    5674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.488952  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:22 kubernetes-upgrade-210902 kubelet[5685]: E0108 21:13:22.695066    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.489299  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:23 kubernetes-upgrade-210902 kubelet[5696]: E0108 21:13:23.446714    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.489663  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5708]: E0108 21:13:24.196595    5708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.490073  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5719]: E0108 21:13:24.945926    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:25.490201  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:25.490212  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:13:25.490330  181838 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0108 21:13:25.490346  181838 out.go:239]   Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5674]: E0108 21:13:21.946669    5674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5674]: E0108 21:13:21.946669    5674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.490355  181838 out.go:239]   Jan 08 21:13:22 kubernetes-upgrade-210902 kubelet[5685]: E0108 21:13:22.695066    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:22 kubernetes-upgrade-210902 kubelet[5685]: E0108 21:13:22.695066    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.490365  181838 out.go:239]   Jan 08 21:13:23 kubernetes-upgrade-210902 kubelet[5696]: E0108 21:13:23.446714    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:23 kubernetes-upgrade-210902 kubelet[5696]: E0108 21:13:23.446714    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.490376  181838 out.go:239]   Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5708]: E0108 21:13:24.196595    5708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5708]: E0108 21:13:24.196595    5708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.490385  181838 out.go:239]   Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5719]: E0108 21:13:24.945926    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5719]: E0108 21:13:24.945926    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:25.490393  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:25.490404  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:13:35.491744  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:13:35.616749  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:13:35.616817  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:13:35.641700  181838 cri.go:87] found id: ""
	I0108 21:13:35.641722  181838 logs.go:274] 0 containers: []
	W0108 21:13:35.641730  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:13:35.641736  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:13:35.641791  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:13:35.665354  181838 cri.go:87] found id: ""
	I0108 21:13:35.665382  181838 logs.go:274] 0 containers: []
	W0108 21:13:35.665390  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:13:35.665397  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:13:35.665445  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:13:35.688802  181838 cri.go:87] found id: ""
	I0108 21:13:35.688834  181838 logs.go:274] 0 containers: []
	W0108 21:13:35.688844  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:13:35.688850  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:13:35.688890  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:13:35.712661  181838 cri.go:87] found id: ""
	I0108 21:13:35.712690  181838 logs.go:274] 0 containers: []
	W0108 21:13:35.712699  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:13:35.712708  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:13:35.712768  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:13:35.737882  181838 cri.go:87] found id: ""
	I0108 21:13:35.737904  181838 logs.go:274] 0 containers: []
	W0108 21:13:35.737913  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:13:35.737921  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:13:35.737974  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:13:35.763698  181838 cri.go:87] found id: ""
	I0108 21:13:35.763721  181838 logs.go:274] 0 containers: []
	W0108 21:13:35.763728  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:13:35.763737  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:13:35.763791  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:13:35.786667  181838 cri.go:87] found id: ""
	I0108 21:13:35.786688  181838 logs.go:274] 0 containers: []
	W0108 21:13:35.786694  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:13:35.786700  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:13:35.786747  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:13:35.809460  181838 cri.go:87] found id: ""
	I0108 21:13:35.809480  181838 logs.go:274] 0 containers: []
	W0108 21:13:35.809486  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:13:35.809494  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:13:35.809510  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:13:35.836773  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:13:35.836797  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:13:35.855656  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:45 kubernetes-upgrade-210902 kubelet[4753]: E0108 21:12:45.951223    4753 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.856191  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:46 kubernetes-upgrade-210902 kubelet[4764]: E0108 21:12:46.702469    4764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.856603  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:47 kubernetes-upgrade-210902 kubelet[4775]: E0108 21:12:47.477715    4775 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.857020  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:48 kubernetes-upgrade-210902 kubelet[4785]: E0108 21:12:48.211798    4785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.857393  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:48 kubernetes-upgrade-210902 kubelet[4794]: E0108 21:12:48.948316    4794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.857749  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:49 kubernetes-upgrade-210902 kubelet[4804]: E0108 21:12:49.699460    4804 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.858115  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:50 kubernetes-upgrade-210902 kubelet[4814]: E0108 21:12:50.446250    4814 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.858482  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4825]: E0108 21:12:51.274290    4825 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.858849  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4835]: E0108 21:12:51.981369    4835 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.859393  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:52 kubernetes-upgrade-210902 kubelet[4846]: E0108 21:12:52.696913    4846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.859912  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:53 kubernetes-upgrade-210902 kubelet[4857]: E0108 21:12:53.463386    4857 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.860297  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:54 kubernetes-upgrade-210902 kubelet[5003]: E0108 21:12:54.195862    5003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.860651  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:54 kubernetes-upgrade-210902 kubelet[5014]: E0108 21:12:54.973538    5014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.861024  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:55 kubernetes-upgrade-210902 kubelet[5025]: E0108 21:12:55.700041    5025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.861385  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:56 kubernetes-upgrade-210902 kubelet[5036]: E0108 21:12:56.460751    5036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.861738  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5046]: E0108 21:12:57.202829    5046 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.862110  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5056]: E0108 21:12:57.949309    5056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.862477  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:58 kubernetes-upgrade-210902 kubelet[5067]: E0108 21:12:58.695134    5067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.862870  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:59 kubernetes-upgrade-210902 kubelet[5078]: E0108 21:12:59.453700    5078 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.863230  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5090]: E0108 21:13:00.195809    5090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.863696  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5101]: E0108 21:13:00.949365    5101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.864052  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:01 kubernetes-upgrade-210902 kubelet[5112]: E0108 21:13:01.699957    5112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.864414  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:02 kubernetes-upgrade-210902 kubelet[5122]: E0108 21:13:02.445014    5122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.864784  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5133]: E0108 21:13:03.211715    5133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.865137  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5144]: E0108 21:13:03.945571    5144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.865490  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:04 kubernetes-upgrade-210902 kubelet[5291]: E0108 21:13:04.696837    5291 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.865855  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:05 kubernetes-upgrade-210902 kubelet[5302]: E0108 21:13:05.455832    5302 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.866219  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:06 kubernetes-upgrade-210902 kubelet[5311]: E0108 21:13:06.202481    5311 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.866576  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:06 kubernetes-upgrade-210902 kubelet[5321]: E0108 21:13:06.953379    5321 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.866946  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:07 kubernetes-upgrade-210902 kubelet[5332]: E0108 21:13:07.695770    5332 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.867338  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:08 kubernetes-upgrade-210902 kubelet[5342]: E0108 21:13:08.448723    5342 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.867824  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5352]: E0108 21:13:09.203634    5352 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.868182  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5363]: E0108 21:13:09.945857    5363 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.868537  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:10 kubernetes-upgrade-210902 kubelet[5373]: E0108 21:13:10.695131    5373 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.868894  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:11 kubernetes-upgrade-210902 kubelet[5384]: E0108 21:13:11.445518    5384 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.869262  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5395]: E0108 21:13:12.198327    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.869620  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5405]: E0108 21:13:12.948375    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.869975  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:13 kubernetes-upgrade-210902 kubelet[5415]: E0108 21:13:13.704963    5415 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.870332  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:14 kubernetes-upgrade-210902 kubelet[5426]: E0108 21:13:14.445641    5426 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.870716  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:15 kubernetes-upgrade-210902 kubelet[5575]: E0108 21:13:15.199671    5575 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.871194  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:15 kubernetes-upgrade-210902 kubelet[5585]: E0108 21:13:15.945043    5585 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.871800  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:16 kubernetes-upgrade-210902 kubelet[5596]: E0108 21:13:16.695191    5596 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.872384  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:17 kubernetes-upgrade-210902 kubelet[5607]: E0108 21:13:17.473576    5607 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.872974  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5617]: E0108 21:13:18.200053    5617 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.873467  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5628]: E0108 21:13:18.947856    5628 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.873834  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:19 kubernetes-upgrade-210902 kubelet[5639]: E0108 21:13:19.696468    5639 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.874325  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:20 kubernetes-upgrade-210902 kubelet[5651]: E0108 21:13:20.445123    5651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.874920  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5662]: E0108 21:13:21.195995    5662 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.875386  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5674]: E0108 21:13:21.946669    5674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.875831  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:22 kubernetes-upgrade-210902 kubelet[5685]: E0108 21:13:22.695066    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.876212  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:23 kubernetes-upgrade-210902 kubelet[5696]: E0108 21:13:23.446714    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.876565  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5708]: E0108 21:13:24.196595    5708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.876921  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5719]: E0108 21:13:24.945926    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.877282  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:25 kubernetes-upgrade-210902 kubelet[5868]: E0108 21:13:25.696363    5868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.877680  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:26 kubernetes-upgrade-210902 kubelet[5879]: E0108 21:13:26.446871    5879 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.878035  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:27 kubernetes-upgrade-210902 kubelet[5890]: E0108 21:13:27.196689    5890 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.878386  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:27 kubernetes-upgrade-210902 kubelet[5901]: E0108 21:13:27.945492    5901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.878742  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:28 kubernetes-upgrade-210902 kubelet[5913]: E0108 21:13:28.700535    5913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.879092  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:29 kubernetes-upgrade-210902 kubelet[5924]: E0108 21:13:29.447713    5924 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.879461  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5935]: E0108 21:13:30.195990    5935 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.879863  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5946]: E0108 21:13:30.946203    5946 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.880219  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:31 kubernetes-upgrade-210902 kubelet[5958]: E0108 21:13:31.695521    5958 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.880575  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:32 kubernetes-upgrade-210902 kubelet[5969]: E0108 21:13:32.454066    5969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.880932  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5979]: E0108 21:13:33.216578    5979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.881283  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5989]: E0108 21:13:33.945429    5989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.881640  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:34 kubernetes-upgrade-210902 kubelet[6000]: E0108 21:13:34.696181    6000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.881996  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:35 kubernetes-upgrade-210902 kubelet[6010]: E0108 21:13:35.445631    6010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:35.882114  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:13:35.882129  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:13:35.899296  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:13:35.899333  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:13:35.954201  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:13:35.954227  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:13:35.954242  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:13:35.988771  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:35.988797  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:13:35.988906  181838 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0108 21:13:35.988917  181838 out.go:239]   Jan 08 21:13:32 kubernetes-upgrade-210902 kubelet[5969]: E0108 21:13:32.454066    5969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:32 kubernetes-upgrade-210902 kubelet[5969]: E0108 21:13:32.454066    5969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.988922  181838 out.go:239]   Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5979]: E0108 21:13:33.216578    5979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5979]: E0108 21:13:33.216578    5979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.988926  181838 out.go:239]   Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5989]: E0108 21:13:33.945429    5989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5989]: E0108 21:13:33.945429    5989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.988933  181838 out.go:239]   Jan 08 21:13:34 kubernetes-upgrade-210902 kubelet[6000]: E0108 21:13:34.696181    6000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:34 kubernetes-upgrade-210902 kubelet[6000]: E0108 21:13:34.696181    6000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.988940  181838 out.go:239]   Jan 08 21:13:35 kubernetes-upgrade-210902 kubelet[6010]: E0108 21:13:35.445631    6010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:35 kubernetes-upgrade-210902 kubelet[6010]: E0108 21:13:35.445631    6010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:35.988948  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:35.988954  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:13:45.990303  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:13:46.117098  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:13:46.117168  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:13:46.141382  181838 cri.go:87] found id: ""
	I0108 21:13:46.141416  181838 logs.go:274] 0 containers: []
	W0108 21:13:46.141425  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:13:46.141432  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:13:46.141499  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:13:46.164973  181838 cri.go:87] found id: ""
	I0108 21:13:46.164998  181838 logs.go:274] 0 containers: []
	W0108 21:13:46.165007  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:13:46.165015  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:13:46.165066  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:13:46.189558  181838 cri.go:87] found id: ""
	I0108 21:13:46.189585  181838 logs.go:274] 0 containers: []
	W0108 21:13:46.189594  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:13:46.189601  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:13:46.189651  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:13:46.213759  181838 cri.go:87] found id: ""
	I0108 21:13:46.213786  181838 logs.go:274] 0 containers: []
	W0108 21:13:46.213794  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:13:46.213802  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:13:46.213856  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:13:46.236871  181838 cri.go:87] found id: ""
	I0108 21:13:46.236898  181838 logs.go:274] 0 containers: []
	W0108 21:13:46.236908  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:13:46.236915  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:13:46.236961  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:13:46.262650  181838 cri.go:87] found id: ""
	I0108 21:13:46.262675  181838 logs.go:274] 0 containers: []
	W0108 21:13:46.262683  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:13:46.262691  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:13:46.262732  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:13:46.285635  181838 cri.go:87] found id: ""
	I0108 21:13:46.285667  181838 logs.go:274] 0 containers: []
	W0108 21:13:46.285674  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:13:46.285680  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:13:46.285720  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:13:46.308380  181838 cri.go:87] found id: ""
	I0108 21:13:46.308403  181838 logs.go:274] 0 containers: []
	W0108 21:13:46.308411  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:13:46.308422  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:13:46.308435  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:13:46.333656  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:13:46.333685  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:13:46.348670  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:56 kubernetes-upgrade-210902 kubelet[5036]: E0108 21:12:56.460751    5036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.349037  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5046]: E0108 21:12:57.202829    5046 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.349396  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5056]: E0108 21:12:57.949309    5056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.349752  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:58 kubernetes-upgrade-210902 kubelet[5067]: E0108 21:12:58.695134    5067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.350108  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:59 kubernetes-upgrade-210902 kubelet[5078]: E0108 21:12:59.453700    5078 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.350499  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5090]: E0108 21:13:00.195809    5090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.351013  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5101]: E0108 21:13:00.949365    5101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.351608  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:01 kubernetes-upgrade-210902 kubelet[5112]: E0108 21:13:01.699957    5112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.351968  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:02 kubernetes-upgrade-210902 kubelet[5122]: E0108 21:13:02.445014    5122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.352362  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5133]: E0108 21:13:03.211715    5133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.352721  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5144]: E0108 21:13:03.945571    5144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.353079  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:04 kubernetes-upgrade-210902 kubelet[5291]: E0108 21:13:04.696837    5291 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.353433  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:05 kubernetes-upgrade-210902 kubelet[5302]: E0108 21:13:05.455832    5302 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.353786  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:06 kubernetes-upgrade-210902 kubelet[5311]: E0108 21:13:06.202481    5311 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.354138  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:06 kubernetes-upgrade-210902 kubelet[5321]: E0108 21:13:06.953379    5321 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.354487  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:07 kubernetes-upgrade-210902 kubelet[5332]: E0108 21:13:07.695770    5332 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.354858  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:08 kubernetes-upgrade-210902 kubelet[5342]: E0108 21:13:08.448723    5342 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.355213  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5352]: E0108 21:13:09.203634    5352 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.355604  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5363]: E0108 21:13:09.945857    5363 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.356003  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:10 kubernetes-upgrade-210902 kubelet[5373]: E0108 21:13:10.695131    5373 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.356356  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:11 kubernetes-upgrade-210902 kubelet[5384]: E0108 21:13:11.445518    5384 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.356713  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5395]: E0108 21:13:12.198327    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.357066  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5405]: E0108 21:13:12.948375    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.357418  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:13 kubernetes-upgrade-210902 kubelet[5415]: E0108 21:13:13.704963    5415 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.357774  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:14 kubernetes-upgrade-210902 kubelet[5426]: E0108 21:13:14.445641    5426 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.358124  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:15 kubernetes-upgrade-210902 kubelet[5575]: E0108 21:13:15.199671    5575 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.358482  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:15 kubernetes-upgrade-210902 kubelet[5585]: E0108 21:13:15.945043    5585 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.358848  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:16 kubernetes-upgrade-210902 kubelet[5596]: E0108 21:13:16.695191    5596 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.359212  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:17 kubernetes-upgrade-210902 kubelet[5607]: E0108 21:13:17.473576    5607 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.359598  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5617]: E0108 21:13:18.200053    5617 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.359951  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5628]: E0108 21:13:18.947856    5628 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.360302  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:19 kubernetes-upgrade-210902 kubelet[5639]: E0108 21:13:19.696468    5639 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.360658  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:20 kubernetes-upgrade-210902 kubelet[5651]: E0108 21:13:20.445123    5651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.361024  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5662]: E0108 21:13:21.195995    5662 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.361374  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5674]: E0108 21:13:21.946669    5674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.361730  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:22 kubernetes-upgrade-210902 kubelet[5685]: E0108 21:13:22.695066    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.362144  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:23 kubernetes-upgrade-210902 kubelet[5696]: E0108 21:13:23.446714    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.362498  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5708]: E0108 21:13:24.196595    5708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.362856  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5719]: E0108 21:13:24.945926    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.363228  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:25 kubernetes-upgrade-210902 kubelet[5868]: E0108 21:13:25.696363    5868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.363607  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:26 kubernetes-upgrade-210902 kubelet[5879]: E0108 21:13:26.446871    5879 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.363957  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:27 kubernetes-upgrade-210902 kubelet[5890]: E0108 21:13:27.196689    5890 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.364307  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:27 kubernetes-upgrade-210902 kubelet[5901]: E0108 21:13:27.945492    5901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.364660  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:28 kubernetes-upgrade-210902 kubelet[5913]: E0108 21:13:28.700535    5913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.365008  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:29 kubernetes-upgrade-210902 kubelet[5924]: E0108 21:13:29.447713    5924 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.365357  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5935]: E0108 21:13:30.195990    5935 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.365711  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5946]: E0108 21:13:30.946203    5946 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.366057  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:31 kubernetes-upgrade-210902 kubelet[5958]: E0108 21:13:31.695521    5958 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.366406  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:32 kubernetes-upgrade-210902 kubelet[5969]: E0108 21:13:32.454066    5969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.366766  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5979]: E0108 21:13:33.216578    5979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.367117  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5989]: E0108 21:13:33.945429    5989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.367478  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:34 kubernetes-upgrade-210902 kubelet[6000]: E0108 21:13:34.696181    6000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.367857  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:35 kubernetes-upgrade-210902 kubelet[6010]: E0108 21:13:35.445631    6010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.368225  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:36 kubernetes-upgrade-210902 kubelet[6161]: E0108 21:13:36.214784    6161 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.368578  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:36 kubernetes-upgrade-210902 kubelet[6171]: E0108 21:13:36.947039    6171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.368928  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:37 kubernetes-upgrade-210902 kubelet[6182]: E0108 21:13:37.695548    6182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.369313  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:38 kubernetes-upgrade-210902 kubelet[6193]: E0108 21:13:38.469776    6193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.369673  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:39 kubernetes-upgrade-210902 kubelet[6203]: E0108 21:13:39.204546    6203 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.370042  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:39 kubernetes-upgrade-210902 kubelet[6213]: E0108 21:13:39.944879    6213 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.370394  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:40 kubernetes-upgrade-210902 kubelet[6224]: E0108 21:13:40.695541    6224 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.370753  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:41 kubernetes-upgrade-210902 kubelet[6235]: E0108 21:13:41.446136    6235 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.371209  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6246]: E0108 21:13:42.195645    6246 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.371630  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6257]: E0108 21:13:42.945994    6257 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.371984  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:43 kubernetes-upgrade-210902 kubelet[6268]: E0108 21:13:43.695611    6268 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.372369  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:44 kubernetes-upgrade-210902 kubelet[6279]: E0108 21:13:44.446951    6279 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.372732  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6290]: E0108 21:13:45.195689    6290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.373095  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6301]: E0108 21:13:45.945075    6301 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:46.373212  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:13:46.373226  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:13:46.390527  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:13:46.390557  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:13:46.447603  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:13:46.447625  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:13:46.447639  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:13:46.485360  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:46.485386  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:13:46.485503  181838 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0108 21:13:46.485520  181838 out.go:239]   Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6257]: E0108 21:13:42.945994    6257 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6257]: E0108 21:13:42.945994    6257 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.485529  181838 out.go:239]   Jan 08 21:13:43 kubernetes-upgrade-210902 kubelet[6268]: E0108 21:13:43.695611    6268 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:43 kubernetes-upgrade-210902 kubelet[6268]: E0108 21:13:43.695611    6268 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.485541  181838 out.go:239]   Jan 08 21:13:44 kubernetes-upgrade-210902 kubelet[6279]: E0108 21:13:44.446951    6279 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:44 kubernetes-upgrade-210902 kubelet[6279]: E0108 21:13:44.446951    6279 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.485547  181838 out.go:239]   Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6290]: E0108 21:13:45.195689    6290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6290]: E0108 21:13:45.195689    6290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.485555  181838 out.go:239]   Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6301]: E0108 21:13:45.945075    6301 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6301]: E0108 21:13:45.945075    6301 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:46.485559  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:46.485566  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:13:56.487053  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:13:56.617062  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:13:56.617126  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:13:56.644257  181838 cri.go:87] found id: ""
	I0108 21:13:56.644283  181838 logs.go:274] 0 containers: []
	W0108 21:13:56.644291  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:13:56.644297  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:13:56.644348  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:13:56.669044  181838 cri.go:87] found id: ""
	I0108 21:13:56.669064  181838 logs.go:274] 0 containers: []
	W0108 21:13:56.669070  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:13:56.669076  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:13:56.669120  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:13:56.692166  181838 cri.go:87] found id: ""
	I0108 21:13:56.692185  181838 logs.go:274] 0 containers: []
	W0108 21:13:56.692191  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:13:56.692197  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:13:56.692236  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:13:56.714838  181838 cri.go:87] found id: ""
	I0108 21:13:56.714859  181838 logs.go:274] 0 containers: []
	W0108 21:13:56.714865  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:13:56.714870  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:13:56.714919  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:13:56.740419  181838 cri.go:87] found id: ""
	I0108 21:13:56.740441  181838 logs.go:274] 0 containers: []
	W0108 21:13:56.740450  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:13:56.740459  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:13:56.740538  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:13:56.767103  181838 cri.go:87] found id: ""
	I0108 21:13:56.767128  181838 logs.go:274] 0 containers: []
	W0108 21:13:56.767135  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:13:56.767141  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:13:56.767180  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:13:56.793172  181838 cri.go:87] found id: ""
	I0108 21:13:56.793196  181838 logs.go:274] 0 containers: []
	W0108 21:13:56.793204  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:13:56.793212  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:13:56.793250  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:13:56.816140  181838 cri.go:87] found id: ""
	I0108 21:13:56.816166  181838 logs.go:274] 0 containers: []
	W0108 21:13:56.816173  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:13:56.816182  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:13:56.816194  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:13:56.833793  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:06 kubernetes-upgrade-210902 kubelet[5321]: E0108 21:13:06.953379    5321 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.834382  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:07 kubernetes-upgrade-210902 kubelet[5332]: E0108 21:13:07.695770    5332 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.834970  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:08 kubernetes-upgrade-210902 kubelet[5342]: E0108 21:13:08.448723    5342 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.835401  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5352]: E0108 21:13:09.203634    5352 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.835789  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5363]: E0108 21:13:09.945857    5363 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.836149  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:10 kubernetes-upgrade-210902 kubelet[5373]: E0108 21:13:10.695131    5373 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.836514  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:11 kubernetes-upgrade-210902 kubelet[5384]: E0108 21:13:11.445518    5384 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.836884  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5395]: E0108 21:13:12.198327    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.837255  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5405]: E0108 21:13:12.948375    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.837613  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:13 kubernetes-upgrade-210902 kubelet[5415]: E0108 21:13:13.704963    5415 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.838019  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:14 kubernetes-upgrade-210902 kubelet[5426]: E0108 21:13:14.445641    5426 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.838374  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:15 kubernetes-upgrade-210902 kubelet[5575]: E0108 21:13:15.199671    5575 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.838726  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:15 kubernetes-upgrade-210902 kubelet[5585]: E0108 21:13:15.945043    5585 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.839096  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:16 kubernetes-upgrade-210902 kubelet[5596]: E0108 21:13:16.695191    5596 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.839463  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:17 kubernetes-upgrade-210902 kubelet[5607]: E0108 21:13:17.473576    5607 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.839840  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5617]: E0108 21:13:18.200053    5617 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.840195  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5628]: E0108 21:13:18.947856    5628 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.840570  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:19 kubernetes-upgrade-210902 kubelet[5639]: E0108 21:13:19.696468    5639 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.840924  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:20 kubernetes-upgrade-210902 kubelet[5651]: E0108 21:13:20.445123    5651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.841274  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5662]: E0108 21:13:21.195995    5662 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.841627  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5674]: E0108 21:13:21.946669    5674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.841983  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:22 kubernetes-upgrade-210902 kubelet[5685]: E0108 21:13:22.695066    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.842353  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:23 kubernetes-upgrade-210902 kubelet[5696]: E0108 21:13:23.446714    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.842710  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5708]: E0108 21:13:24.196595    5708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.843061  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5719]: E0108 21:13:24.945926    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.843409  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:25 kubernetes-upgrade-210902 kubelet[5868]: E0108 21:13:25.696363    5868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.843812  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:26 kubernetes-upgrade-210902 kubelet[5879]: E0108 21:13:26.446871    5879 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.844177  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:27 kubernetes-upgrade-210902 kubelet[5890]: E0108 21:13:27.196689    5890 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.844533  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:27 kubernetes-upgrade-210902 kubelet[5901]: E0108 21:13:27.945492    5901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.844895  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:28 kubernetes-upgrade-210902 kubelet[5913]: E0108 21:13:28.700535    5913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.845252  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:29 kubernetes-upgrade-210902 kubelet[5924]: E0108 21:13:29.447713    5924 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.845608  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5935]: E0108 21:13:30.195990    5935 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.845984  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5946]: E0108 21:13:30.946203    5946 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.846340  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:31 kubernetes-upgrade-210902 kubelet[5958]: E0108 21:13:31.695521    5958 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.846695  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:32 kubernetes-upgrade-210902 kubelet[5969]: E0108 21:13:32.454066    5969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.847048  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5979]: E0108 21:13:33.216578    5979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.847402  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5989]: E0108 21:13:33.945429    5989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.847781  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:34 kubernetes-upgrade-210902 kubelet[6000]: E0108 21:13:34.696181    6000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.848153  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:35 kubernetes-upgrade-210902 kubelet[6010]: E0108 21:13:35.445631    6010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.848536  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:36 kubernetes-upgrade-210902 kubelet[6161]: E0108 21:13:36.214784    6161 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.848899  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:36 kubernetes-upgrade-210902 kubelet[6171]: E0108 21:13:36.947039    6171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.849272  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:37 kubernetes-upgrade-210902 kubelet[6182]: E0108 21:13:37.695548    6182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.849652  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:38 kubernetes-upgrade-210902 kubelet[6193]: E0108 21:13:38.469776    6193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.850013  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:39 kubernetes-upgrade-210902 kubelet[6203]: E0108 21:13:39.204546    6203 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.850377  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:39 kubernetes-upgrade-210902 kubelet[6213]: E0108 21:13:39.944879    6213 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.850742  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:40 kubernetes-upgrade-210902 kubelet[6224]: E0108 21:13:40.695541    6224 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.851265  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:41 kubernetes-upgrade-210902 kubelet[6235]: E0108 21:13:41.446136    6235 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.851829  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6246]: E0108 21:13:42.195645    6246 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.852226  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6257]: E0108 21:13:42.945994    6257 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.852601  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:43 kubernetes-upgrade-210902 kubelet[6268]: E0108 21:13:43.695611    6268 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.852968  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:44 kubernetes-upgrade-210902 kubelet[6279]: E0108 21:13:44.446951    6279 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.853338  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6290]: E0108 21:13:45.195689    6290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.853716  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6301]: E0108 21:13:45.945075    6301 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.854079  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:46 kubernetes-upgrade-210902 kubelet[6451]: E0108 21:13:46.694345    6451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.854448  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:47 kubernetes-upgrade-210902 kubelet[6461]: E0108 21:13:47.485269    6461 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.854819  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:48 kubernetes-upgrade-210902 kubelet[6471]: E0108 21:13:48.194061    6471 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.855191  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:48 kubernetes-upgrade-210902 kubelet[6482]: E0108 21:13:48.947504    6482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.855603  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:49 kubernetes-upgrade-210902 kubelet[6493]: E0108 21:13:49.694384    6493 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.856000  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:50 kubernetes-upgrade-210902 kubelet[6504]: E0108 21:13:50.445872    6504 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.856372  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:51 kubernetes-upgrade-210902 kubelet[6515]: E0108 21:13:51.198152    6515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.856744  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:51 kubernetes-upgrade-210902 kubelet[6525]: E0108 21:13:51.945447    6525 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.857112  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:52 kubernetes-upgrade-210902 kubelet[6536]: E0108 21:13:52.693871    6536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.857474  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:53 kubernetes-upgrade-210902 kubelet[6548]: E0108 21:13:53.446740    6548 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.857847  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:54 kubernetes-upgrade-210902 kubelet[6559]: E0108 21:13:54.196067    6559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.858214  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:54 kubernetes-upgrade-210902 kubelet[6569]: E0108 21:13:54.946137    6569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.858594  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:55 kubernetes-upgrade-210902 kubelet[6580]: E0108 21:13:55.695271    6580 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.858957  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:56 kubernetes-upgrade-210902 kubelet[6591]: E0108 21:13:56.445560    6591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:56.859090  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:13:56.859109  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:13:56.879005  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:13:56.879044  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:13:56.934680  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:13:56.934709  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:13:56.934722  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:13:56.968969  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:13:56.968997  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:13:56.994718  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:56.994740  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:13:56.994837  181838 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0108 21:13:56.994849  181838 out.go:239]   Jan 08 21:13:53 kubernetes-upgrade-210902 kubelet[6548]: E0108 21:13:53.446740    6548 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:53 kubernetes-upgrade-210902 kubelet[6548]: E0108 21:13:53.446740    6548 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.994854  181838 out.go:239]   Jan 08 21:13:54 kubernetes-upgrade-210902 kubelet[6559]: E0108 21:13:54.196067    6559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:54 kubernetes-upgrade-210902 kubelet[6559]: E0108 21:13:54.196067    6559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.994859  181838 out.go:239]   Jan 08 21:13:54 kubernetes-upgrade-210902 kubelet[6569]: E0108 21:13:54.946137    6569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:54 kubernetes-upgrade-210902 kubelet[6569]: E0108 21:13:54.946137    6569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.994865  181838 out.go:239]   Jan 08 21:13:55 kubernetes-upgrade-210902 kubelet[6580]: E0108 21:13:55.695271    6580 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:55 kubernetes-upgrade-210902 kubelet[6580]: E0108 21:13:55.695271    6580 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.994871  181838 out.go:239]   Jan 08 21:13:56 kubernetes-upgrade-210902 kubelet[6591]: E0108 21:13:56.445560    6591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:13:56 kubernetes-upgrade-210902 kubelet[6591]: E0108 21:13:56.445560    6591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:56.994875  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:56.994880  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:14:06.996209  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:14:07.117290  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:14:07.117354  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:14:07.141283  181838 cri.go:87] found id: ""
	I0108 21:14:07.141303  181838 logs.go:274] 0 containers: []
	W0108 21:14:07.141309  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:14:07.141315  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:14:07.141352  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:14:07.164314  181838 cri.go:87] found id: ""
	I0108 21:14:07.164341  181838 logs.go:274] 0 containers: []
	W0108 21:14:07.164351  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:14:07.164358  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:14:07.164399  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:14:07.187028  181838 cri.go:87] found id: ""
	I0108 21:14:07.187057  181838 logs.go:274] 0 containers: []
	W0108 21:14:07.187063  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:14:07.187069  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:14:07.187109  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:14:07.211446  181838 cri.go:87] found id: ""
	I0108 21:14:07.211467  181838 logs.go:274] 0 containers: []
	W0108 21:14:07.211491  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:14:07.211499  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:14:07.211552  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:14:07.236272  181838 cri.go:87] found id: ""
	I0108 21:14:07.236297  181838 logs.go:274] 0 containers: []
	W0108 21:14:07.236305  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:14:07.236312  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:14:07.236367  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:14:07.261316  181838 cri.go:87] found id: ""
	I0108 21:14:07.261339  181838 logs.go:274] 0 containers: []
	W0108 21:14:07.261346  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:14:07.261354  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:14:07.261410  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:14:07.283963  181838 cri.go:87] found id: ""
	I0108 21:14:07.283982  181838 logs.go:274] 0 containers: []
	W0108 21:14:07.283989  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:14:07.283995  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:14:07.284036  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:14:07.307456  181838 cri.go:87] found id: ""
	I0108 21:14:07.307509  181838 logs.go:274] 0 containers: []
	W0108 21:14:07.307519  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:14:07.307532  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:14:07.307547  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:14:07.324202  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:17 kubernetes-upgrade-210902 kubelet[5607]: E0108 21:13:17.473576    5607 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.324598  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5617]: E0108 21:13:18.200053    5617 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.324986  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5628]: E0108 21:13:18.947856    5628 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.325356  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:19 kubernetes-upgrade-210902 kubelet[5639]: E0108 21:13:19.696468    5639 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.325773  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:20 kubernetes-upgrade-210902 kubelet[5651]: E0108 21:13:20.445123    5651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.326140  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5662]: E0108 21:13:21.195995    5662 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.326498  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5674]: E0108 21:13:21.946669    5674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.326850  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:22 kubernetes-upgrade-210902 kubelet[5685]: E0108 21:13:22.695066    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.327201  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:23 kubernetes-upgrade-210902 kubelet[5696]: E0108 21:13:23.446714    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.327588  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5708]: E0108 21:13:24.196595    5708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.327941  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5719]: E0108 21:13:24.945926    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.328294  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:25 kubernetes-upgrade-210902 kubelet[5868]: E0108 21:13:25.696363    5868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.328648  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:26 kubernetes-upgrade-210902 kubelet[5879]: E0108 21:13:26.446871    5879 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.329009  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:27 kubernetes-upgrade-210902 kubelet[5890]: E0108 21:13:27.196689    5890 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.329357  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:27 kubernetes-upgrade-210902 kubelet[5901]: E0108 21:13:27.945492    5901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.329802  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:28 kubernetes-upgrade-210902 kubelet[5913]: E0108 21:13:28.700535    5913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.330179  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:29 kubernetes-upgrade-210902 kubelet[5924]: E0108 21:13:29.447713    5924 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.330537  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5935]: E0108 21:13:30.195990    5935 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.330890  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5946]: E0108 21:13:30.946203    5946 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.331239  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:31 kubernetes-upgrade-210902 kubelet[5958]: E0108 21:13:31.695521    5958 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.331659  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:32 kubernetes-upgrade-210902 kubelet[5969]: E0108 21:13:32.454066    5969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.332014  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5979]: E0108 21:13:33.216578    5979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.332385  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5989]: E0108 21:13:33.945429    5989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.332763  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:34 kubernetes-upgrade-210902 kubelet[6000]: E0108 21:13:34.696181    6000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.333124  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:35 kubernetes-upgrade-210902 kubelet[6010]: E0108 21:13:35.445631    6010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.333598  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:36 kubernetes-upgrade-210902 kubelet[6161]: E0108 21:13:36.214784    6161 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.333960  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:36 kubernetes-upgrade-210902 kubelet[6171]: E0108 21:13:36.947039    6171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.334311  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:37 kubernetes-upgrade-210902 kubelet[6182]: E0108 21:13:37.695548    6182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.334707  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:38 kubernetes-upgrade-210902 kubelet[6193]: E0108 21:13:38.469776    6193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.335059  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:39 kubernetes-upgrade-210902 kubelet[6203]: E0108 21:13:39.204546    6203 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.335413  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:39 kubernetes-upgrade-210902 kubelet[6213]: E0108 21:13:39.944879    6213 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.335802  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:40 kubernetes-upgrade-210902 kubelet[6224]: E0108 21:13:40.695541    6224 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.336160  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:41 kubernetes-upgrade-210902 kubelet[6235]: E0108 21:13:41.446136    6235 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.336536  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6246]: E0108 21:13:42.195645    6246 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.336928  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6257]: E0108 21:13:42.945994    6257 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.337282  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:43 kubernetes-upgrade-210902 kubelet[6268]: E0108 21:13:43.695611    6268 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.337643  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:44 kubernetes-upgrade-210902 kubelet[6279]: E0108 21:13:44.446951    6279 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.337994  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6290]: E0108 21:13:45.195689    6290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.338342  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6301]: E0108 21:13:45.945075    6301 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.338694  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:46 kubernetes-upgrade-210902 kubelet[6451]: E0108 21:13:46.694345    6451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.339042  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:47 kubernetes-upgrade-210902 kubelet[6461]: E0108 21:13:47.485269    6461 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.339405  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:48 kubernetes-upgrade-210902 kubelet[6471]: E0108 21:13:48.194061    6471 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.339803  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:48 kubernetes-upgrade-210902 kubelet[6482]: E0108 21:13:48.947504    6482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.340155  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:49 kubernetes-upgrade-210902 kubelet[6493]: E0108 21:13:49.694384    6493 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.340520  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:50 kubernetes-upgrade-210902 kubelet[6504]: E0108 21:13:50.445872    6504 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.340879  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:51 kubernetes-upgrade-210902 kubelet[6515]: E0108 21:13:51.198152    6515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.341236  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:51 kubernetes-upgrade-210902 kubelet[6525]: E0108 21:13:51.945447    6525 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.341589  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:52 kubernetes-upgrade-210902 kubelet[6536]: E0108 21:13:52.693871    6536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.341958  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:53 kubernetes-upgrade-210902 kubelet[6548]: E0108 21:13:53.446740    6548 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.342326  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:54 kubernetes-upgrade-210902 kubelet[6559]: E0108 21:13:54.196067    6559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.342680  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:54 kubernetes-upgrade-210902 kubelet[6569]: E0108 21:13:54.946137    6569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.343053  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:55 kubernetes-upgrade-210902 kubelet[6580]: E0108 21:13:55.695271    6580 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.343423  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:56 kubernetes-upgrade-210902 kubelet[6591]: E0108 21:13:56.445560    6591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.343903  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:57 kubernetes-upgrade-210902 kubelet[6738]: E0108 21:13:57.193172    6738 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.344275  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:57 kubernetes-upgrade-210902 kubelet[6749]: E0108 21:13:57.946361    6749 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.344668  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:58 kubernetes-upgrade-210902 kubelet[6761]: E0108 21:13:58.698549    6761 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.345025  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:59 kubernetes-upgrade-210902 kubelet[6773]: E0108 21:13:59.444839    6773 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.345375  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:00 kubernetes-upgrade-210902 kubelet[6784]: E0108 21:14:00.195304    6784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.345820  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:00 kubernetes-upgrade-210902 kubelet[6795]: E0108 21:14:00.946849    6795 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.346222  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:01 kubernetes-upgrade-210902 kubelet[6806]: E0108 21:14:01.697208    6806 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.346735  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:02 kubernetes-upgrade-210902 kubelet[6818]: E0108 21:14:02.447776    6818 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.347119  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:03 kubernetes-upgrade-210902 kubelet[6830]: E0108 21:14:03.200414    6830 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.347514  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:03 kubernetes-upgrade-210902 kubelet[6842]: E0108 21:14:03.946176    6842 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.347999  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:04 kubernetes-upgrade-210902 kubelet[6853]: E0108 21:14:04.696838    6853 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.348362  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:05 kubernetes-upgrade-210902 kubelet[6865]: E0108 21:14:05.447467    6865 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.348722  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:06 kubernetes-upgrade-210902 kubelet[6877]: E0108 21:14:06.196870    6877 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.349084  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:06 kubernetes-upgrade-210902 kubelet[6888]: E0108 21:14:06.947179    6888 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:14:07.349250  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:14:07.349267  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:14:07.368098  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:14:07.368124  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:14:07.434187  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:14:07.434211  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:14:07.434221  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:14:07.481079  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:14:07.481116  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:14:07.511178  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:14:07.511204  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:14:07.511309  181838 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0108 21:14:07.511326  181838 out.go:239]   Jan 08 21:14:03 kubernetes-upgrade-210902 kubelet[6842]: E0108 21:14:03.946176    6842 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:14:03 kubernetes-upgrade-210902 kubelet[6842]: E0108 21:14:03.946176    6842 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.511334  181838 out.go:239]   Jan 08 21:14:04 kubernetes-upgrade-210902 kubelet[6853]: E0108 21:14:04.696838    6853 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:14:04 kubernetes-upgrade-210902 kubelet[6853]: E0108 21:14:04.696838    6853 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.511341  181838 out.go:239]   Jan 08 21:14:05 kubernetes-upgrade-210902 kubelet[6865]: E0108 21:14:05.447467    6865 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:14:05 kubernetes-upgrade-210902 kubelet[6865]: E0108 21:14:05.447467    6865 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.511350  181838 out.go:239]   Jan 08 21:14:06 kubernetes-upgrade-210902 kubelet[6877]: E0108 21:14:06.196870    6877 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:14:06 kubernetes-upgrade-210902 kubelet[6877]: E0108 21:14:06.196870    6877 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.511357  181838 out.go:239]   Jan 08 21:14:06 kubernetes-upgrade-210902 kubelet[6888]: E0108 21:14:06.947179    6888 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:14:06 kubernetes-upgrade-210902 kubelet[6888]: E0108 21:14:06.947179    6888 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:14:07.511365  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:14:07.511372  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:14:17.512442  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:14:17.617425  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:14:17.617502  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:14:17.642169  181838 cri.go:87] found id: ""
	I0108 21:14:17.642197  181838 logs.go:274] 0 containers: []
	W0108 21:14:17.642205  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:14:17.642212  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:14:17.642252  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:14:17.666495  181838 cri.go:87] found id: ""
	I0108 21:14:17.666516  181838 logs.go:274] 0 containers: []
	W0108 21:14:17.666522  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:14:17.666528  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:14:17.666567  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:14:17.690914  181838 cri.go:87] found id: ""
	I0108 21:14:17.690933  181838 logs.go:274] 0 containers: []
	W0108 21:14:17.690939  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:14:17.690945  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:14:17.690986  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:14:17.714578  181838 cri.go:87] found id: ""
	I0108 21:14:17.714598  181838 logs.go:274] 0 containers: []
	W0108 21:14:17.714604  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:14:17.714613  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:14:17.714659  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:14:17.737953  181838 cri.go:87] found id: ""
	I0108 21:14:17.737973  181838 logs.go:274] 0 containers: []
	W0108 21:14:17.737980  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:14:17.737988  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:14:17.738032  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:14:17.762248  181838 cri.go:87] found id: ""
	I0108 21:14:17.762269  181838 logs.go:274] 0 containers: []
	W0108 21:14:17.762276  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:14:17.762284  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:14:17.762340  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:14:17.785902  181838 cri.go:87] found id: ""
	I0108 21:14:17.785925  181838 logs.go:274] 0 containers: []
	W0108 21:14:17.785932  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:14:17.785939  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:14:17.785986  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:14:17.809097  181838 cri.go:87] found id: ""
	I0108 21:14:17.809127  181838 logs.go:274] 0 containers: []
	W0108 21:14:17.809136  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:14:17.809207  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:14:17.809246  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:14:17.825478  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:27 kubernetes-upgrade-210902 kubelet[5901]: E0108 21:13:27.945492    5901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.825850  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:28 kubernetes-upgrade-210902 kubelet[5913]: E0108 21:13:28.700535    5913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.826205  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:29 kubernetes-upgrade-210902 kubelet[5924]: E0108 21:13:29.447713    5924 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.826563  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5935]: E0108 21:13:30.195990    5935 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.826919  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5946]: E0108 21:13:30.946203    5946 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.827282  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:31 kubernetes-upgrade-210902 kubelet[5958]: E0108 21:13:31.695521    5958 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.827695  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:32 kubernetes-upgrade-210902 kubelet[5969]: E0108 21:13:32.454066    5969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.828067  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5979]: E0108 21:13:33.216578    5979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.828420  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5989]: E0108 21:13:33.945429    5989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.828772  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:34 kubernetes-upgrade-210902 kubelet[6000]: E0108 21:13:34.696181    6000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.829120  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:35 kubernetes-upgrade-210902 kubelet[6010]: E0108 21:13:35.445631    6010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.829468  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:36 kubernetes-upgrade-210902 kubelet[6161]: E0108 21:13:36.214784    6161 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.829823  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:36 kubernetes-upgrade-210902 kubelet[6171]: E0108 21:13:36.947039    6171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.830181  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:37 kubernetes-upgrade-210902 kubelet[6182]: E0108 21:13:37.695548    6182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.830534  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:38 kubernetes-upgrade-210902 kubelet[6193]: E0108 21:13:38.469776    6193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.830887  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:39 kubernetes-upgrade-210902 kubelet[6203]: E0108 21:13:39.204546    6203 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.831246  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:39 kubernetes-upgrade-210902 kubelet[6213]: E0108 21:13:39.944879    6213 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.831619  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:40 kubernetes-upgrade-210902 kubelet[6224]: E0108 21:13:40.695541    6224 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.831975  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:41 kubernetes-upgrade-210902 kubelet[6235]: E0108 21:13:41.446136    6235 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.832398  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6246]: E0108 21:13:42.195645    6246 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.832755  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6257]: E0108 21:13:42.945994    6257 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.833131  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:43 kubernetes-upgrade-210902 kubelet[6268]: E0108 21:13:43.695611    6268 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.833485  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:44 kubernetes-upgrade-210902 kubelet[6279]: E0108 21:13:44.446951    6279 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.833846  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6290]: E0108 21:13:45.195689    6290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.834222  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6301]: E0108 21:13:45.945075    6301 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.834576  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:46 kubernetes-upgrade-210902 kubelet[6451]: E0108 21:13:46.694345    6451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.834941  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:47 kubernetes-upgrade-210902 kubelet[6461]: E0108 21:13:47.485269    6461 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.835298  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:48 kubernetes-upgrade-210902 kubelet[6471]: E0108 21:13:48.194061    6471 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.835674  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:48 kubernetes-upgrade-210902 kubelet[6482]: E0108 21:13:48.947504    6482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.836025  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:49 kubernetes-upgrade-210902 kubelet[6493]: E0108 21:13:49.694384    6493 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.836377  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:50 kubernetes-upgrade-210902 kubelet[6504]: E0108 21:13:50.445872    6504 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.836732  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:51 kubernetes-upgrade-210902 kubelet[6515]: E0108 21:13:51.198152    6515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.837085  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:51 kubernetes-upgrade-210902 kubelet[6525]: E0108 21:13:51.945447    6525 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.837443  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:52 kubernetes-upgrade-210902 kubelet[6536]: E0108 21:13:52.693871    6536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.837806  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:53 kubernetes-upgrade-210902 kubelet[6548]: E0108 21:13:53.446740    6548 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.838158  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:54 kubernetes-upgrade-210902 kubelet[6559]: E0108 21:13:54.196067    6559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.838509  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:54 kubernetes-upgrade-210902 kubelet[6569]: E0108 21:13:54.946137    6569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.838869  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:55 kubernetes-upgrade-210902 kubelet[6580]: E0108 21:13:55.695271    6580 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.839258  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:56 kubernetes-upgrade-210902 kubelet[6591]: E0108 21:13:56.445560    6591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.839645  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:57 kubernetes-upgrade-210902 kubelet[6738]: E0108 21:13:57.193172    6738 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.840012  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:57 kubernetes-upgrade-210902 kubelet[6749]: E0108 21:13:57.946361    6749 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.840366  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:58 kubernetes-upgrade-210902 kubelet[6761]: E0108 21:13:58.698549    6761 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.840724  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:59 kubernetes-upgrade-210902 kubelet[6773]: E0108 21:13:59.444839    6773 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.841094  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:00 kubernetes-upgrade-210902 kubelet[6784]: E0108 21:14:00.195304    6784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.841452  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:00 kubernetes-upgrade-210902 kubelet[6795]: E0108 21:14:00.946849    6795 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.841808  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:01 kubernetes-upgrade-210902 kubelet[6806]: E0108 21:14:01.697208    6806 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.842196  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:02 kubernetes-upgrade-210902 kubelet[6818]: E0108 21:14:02.447776    6818 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.842548  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:03 kubernetes-upgrade-210902 kubelet[6830]: E0108 21:14:03.200414    6830 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.842903  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:03 kubernetes-upgrade-210902 kubelet[6842]: E0108 21:14:03.946176    6842 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.843254  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:04 kubernetes-upgrade-210902 kubelet[6853]: E0108 21:14:04.696838    6853 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.843706  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:05 kubernetes-upgrade-210902 kubelet[6865]: E0108 21:14:05.447467    6865 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.844059  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:06 kubernetes-upgrade-210902 kubelet[6877]: E0108 21:14:06.196870    6877 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.844411  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:06 kubernetes-upgrade-210902 kubelet[6888]: E0108 21:14:06.947179    6888 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.844768  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:07 kubernetes-upgrade-210902 kubelet[7035]: E0108 21:14:07.695296    7035 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.845133  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:08 kubernetes-upgrade-210902 kubelet[7047]: E0108 21:14:08.447815    7047 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.845544  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:09 kubernetes-upgrade-210902 kubelet[7058]: E0108 21:14:09.198501    7058 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.845919  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:09 kubernetes-upgrade-210902 kubelet[7069]: E0108 21:14:09.947510    7069 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.846353  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:10 kubernetes-upgrade-210902 kubelet[7080]: E0108 21:14:10.716470    7080 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.846856  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:11 kubernetes-upgrade-210902 kubelet[7091]: E0108 21:14:11.448711    7091 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.847239  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:12 kubernetes-upgrade-210902 kubelet[7103]: E0108 21:14:12.196512    7103 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.847614  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:12 kubernetes-upgrade-210902 kubelet[7114]: E0108 21:14:12.947726    7114 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.847985  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:13 kubernetes-upgrade-210902 kubelet[7125]: E0108 21:14:13.695591    7125 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.848357  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:14 kubernetes-upgrade-210902 kubelet[7136]: E0108 21:14:14.445907    7136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.848722  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:15 kubernetes-upgrade-210902 kubelet[7147]: E0108 21:14:15.197969    7147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.849072  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:15 kubernetes-upgrade-210902 kubelet[7159]: E0108 21:14:15.945679    7159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.849486  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:16 kubernetes-upgrade-210902 kubelet[7170]: E0108 21:14:16.696174    7170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.849907  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:17 kubernetes-upgrade-210902 kubelet[7181]: E0108 21:14:17.473919    7181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:14:17.850058  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:14:17.850074  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:14:17.870317  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:14:17.870344  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:14:17.926427  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:14:17.926447  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:14:17.926457  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:14:17.962577  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:14:17.962607  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:14:17.990544  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:14:17.990567  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:14:17.990670  181838 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0108 21:14:17.990682  181838 out.go:239]   Jan 08 21:14:14 kubernetes-upgrade-210902 kubelet[7136]: E0108 21:14:14.445907    7136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:14:14 kubernetes-upgrade-210902 kubelet[7136]: E0108 21:14:14.445907    7136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.990688  181838 out.go:239]   Jan 08 21:14:15 kubernetes-upgrade-210902 kubelet[7147]: E0108 21:14:15.197969    7147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:14:15 kubernetes-upgrade-210902 kubelet[7147]: E0108 21:14:15.197969    7147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.990696  181838 out.go:239]   Jan 08 21:14:15 kubernetes-upgrade-210902 kubelet[7159]: E0108 21:14:15.945679    7159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:14:15 kubernetes-upgrade-210902 kubelet[7159]: E0108 21:14:15.945679    7159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.990703  181838 out.go:239]   Jan 08 21:14:16 kubernetes-upgrade-210902 kubelet[7170]: E0108 21:14:16.696174    7170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:14:16 kubernetes-upgrade-210902 kubelet[7170]: E0108 21:14:16.696174    7170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.990707  181838 out.go:239]   Jan 08 21:14:17 kubernetes-upgrade-210902 kubelet[7181]: E0108 21:14:17.473919    7181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Jan 08 21:14:17 kubernetes-upgrade-210902 kubelet[7181]: E0108 21:14:17.473919    7181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:14:17.990712  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:14:17.990716  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:14:27.992616  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:14:28.001516  181838 kubeadm.go:631] restartCluster took 4m10.859106409s
	W0108 21:14:28.001655  181838 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0108 21:14:28.001688  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:14:29.868567  181838 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.866858883s)
	I0108 21:14:29.868617  181838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:14:29.879623  181838 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:14:29.887216  181838 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:14:29.887277  181838 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:14:29.894159  181838 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:14:29.894203  181838 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:14:29.927638  181838 kubeadm.go:317] W0108 21:14:29.926890    8509 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:14:29.960107  181838 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:14:30.022728  181838 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:16:26.082331  181838 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 21:16:26.082444  181838 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0108 21:16:26.085125  181838 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:16:26.085205  181838 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:16:26.085310  181838 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:16:26.085402  181838 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:16:26.085447  181838 kubeadm.go:317] OS: Linux
	I0108 21:16:26.085486  181838 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:16:26.085526  181838 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:16:26.085565  181838 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:16:26.085605  181838 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:16:26.085667  181838 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:16:26.085714  181838 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:16:26.085792  181838 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:16:26.085836  181838 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:16:26.085885  181838 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:16:26.085985  181838 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:16:26.086106  181838 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:16:26.086190  181838 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:16:26.086245  181838 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:16:26.088374  181838 out.go:204]   - Generating certificates and keys ...
	I0108 21:16:26.088447  181838 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:16:26.088534  181838 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:16:26.088602  181838 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:16:26.088652  181838 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:16:26.088714  181838 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:16:26.088780  181838 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:16:26.088837  181838 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:16:26.088890  181838 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:16:26.088957  181838 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:16:26.089016  181838 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:16:26.089052  181838 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:16:26.089096  181838 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:16:26.089175  181838 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:16:26.089224  181838 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:16:26.089287  181838 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:16:26.089337  181838 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:16:26.089444  181838 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:16:26.089536  181838 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:16:26.089588  181838 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:16:26.089681  181838 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:16:26.091233  181838 out.go:204]   - Booting up control plane ...
	I0108 21:16:26.091316  181838 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:16:26.091398  181838 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:16:26.091468  181838 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:16:26.091577  181838 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:16:26.091717  181838 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:16:26.091768  181838 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0108 21:16:26.091826  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:16:26.091993  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:16:26.092079  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:16:26.092246  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:16:26.092302  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:16:26.092453  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:16:26.092508  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:16:26.092681  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:16:26.092744  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:16:26.092896  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:16:26.092902  181838 kubeadm.go:317] 
	I0108 21:16:26.092936  181838 kubeadm.go:317] Unfortunately, an error has occurred:
	I0108 21:16:26.092981  181838 kubeadm.go:317] 	timed out waiting for the condition
	I0108 21:16:26.092988  181838 kubeadm.go:317] 
	I0108 21:16:26.093015  181838 kubeadm.go:317] This error is likely caused by:
	I0108 21:16:26.093043  181838 kubeadm.go:317] 	- The kubelet is not running
	I0108 21:16:26.093130  181838 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 21:16:26.093136  181838 kubeadm.go:317] 
	I0108 21:16:26.093260  181838 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 21:16:26.093316  181838 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0108 21:16:26.093361  181838 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0108 21:16:26.093375  181838 kubeadm.go:317] 
	I0108 21:16:26.093499  181838 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 21:16:26.093577  181838 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0108 21:16:26.093651  181838 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0108 21:16:26.093736  181838 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0108 21:16:26.093830  181838 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0108 21:16:26.093909  181838 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	W0108 21:16:26.094151  181838 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0108 21:14:29.926890    8509 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0108 21:14:29.926890    8509 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
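	The kubeadm failure text above already names the checks worth running. A minimal way to run them against this node from the test host, assuming the kubernetes-upgrade-210902 profile named in the kubelet logs gathered later in this section and the containerd socket path the message itself reports, is roughly:
	
	  # run the checks kubeadm suggests inside the minikube node
	  out/minikube-linux-amd64 ssh -p kubernetes-upgrade-210902 -- sudo systemctl status kubelet --no-pager
	  out/minikube-linux-amd64 ssh -p kubernetes-upgrade-210902 -- sudo journalctl -u kubelet -n 100 --no-pager
	  out/minikube-linux-amd64 ssh -p kubernetes-upgrade-210902 -- sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a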
	
	I0108 21:16:26.094195  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:16:27.945693  181838 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.851473031s)
	I0108 21:16:27.945756  181838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:16:27.955421  181838 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:16:27.955506  181838 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:16:27.962747  181838 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:16:27.962788  181838 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:16:27.997191  181838 kubeadm.go:317] W0108 21:16:27.996505   11371 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:16:28.033757  181838 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:16:28.097457  181838 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:18:23.877590  181838 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 21:18:23.877729  181838 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0108 21:18:23.880688  181838 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:18:23.880765  181838 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:18:23.880880  181838 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:18:23.880936  181838 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:18:23.880969  181838 kubeadm.go:317] OS: Linux
	I0108 21:18:23.881009  181838 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:18:23.881086  181838 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:18:23.881163  181838 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:18:23.881233  181838 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:18:23.881298  181838 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:18:23.881356  181838 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:18:23.881398  181838 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:18:23.881448  181838 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:18:23.881486  181838 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:18:23.881545  181838 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:18:23.881630  181838 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:18:23.881718  181838 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:18:23.881772  181838 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:18:23.883791  181838 out.go:204]   - Generating certificates and keys ...
	I0108 21:18:23.883864  181838 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:18:23.883937  181838 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:18:23.883999  181838 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:18:23.884052  181838 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:18:23.884127  181838 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:18:23.884184  181838 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:18:23.884236  181838 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:18:23.884297  181838 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:18:23.884361  181838 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:18:23.884434  181838 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:18:23.884472  181838 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:18:23.884524  181838 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:18:23.884566  181838 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:18:23.884609  181838 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:18:23.884667  181838 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:18:23.884734  181838 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:18:23.884822  181838 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:18:23.884894  181838 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:18:23.884936  181838 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:18:23.884992  181838 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:18:23.886673  181838 out.go:204]   - Booting up control plane ...
	I0108 21:18:23.886750  181838 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:18:23.886829  181838 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:18:23.886909  181838 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:18:23.886977  181838 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:18:23.887108  181838 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:18:23.887178  181838 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0108 21:18:23.887245  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:18:23.887408  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:18:23.887467  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:18:23.887664  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:18:23.887733  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:18:23.887925  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:18:23.887988  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:18:23.888156  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:18:23.888224  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:18:23.888408  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:18:23.888422  181838 kubeadm.go:317] 
	I0108 21:18:23.888467  181838 kubeadm.go:317] Unfortunately, an error has occurred:
	I0108 21:18:23.888516  181838 kubeadm.go:317] 	timed out waiting for the condition
	I0108 21:18:23.888524  181838 kubeadm.go:317] 
	I0108 21:18:23.888551  181838 kubeadm.go:317] This error is likely caused by:
	I0108 21:18:23.888579  181838 kubeadm.go:317] 	- The kubelet is not running
	I0108 21:18:23.888671  181838 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 21:18:23.888679  181838 kubeadm.go:317] 
	I0108 21:18:23.888772  181838 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 21:18:23.888806  181838 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0108 21:18:23.888831  181838 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0108 21:18:23.888837  181838 kubeadm.go:317] 
	I0108 21:18:23.888933  181838 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 21:18:23.889026  181838 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0108 21:18:23.889098  181838 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0108 21:18:23.889207  181838 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0108 21:18:23.889294  181838 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0108 21:18:23.889416  181838 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	I0108 21:18:23.889430  181838 kubeadm.go:398] StartCluster complete in 8m6.778484736s
	I0108 21:18:23.889460  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:18:23.889508  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:18:23.915356  181838 cri.go:87] found id: ""
	I0108 21:18:23.915377  181838 logs.go:274] 0 containers: []
	W0108 21:18:23.915382  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:18:23.915388  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:18:23.915439  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:18:23.938566  181838 cri.go:87] found id: ""
	I0108 21:18:23.938594  181838 logs.go:274] 0 containers: []
	W0108 21:18:23.938603  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:18:23.938610  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:18:23.938724  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:18:23.962049  181838 cri.go:87] found id: ""
	I0108 21:18:23.962090  181838 logs.go:274] 0 containers: []
	W0108 21:18:23.962099  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:18:23.962107  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:18:23.962164  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:18:23.985149  181838 cri.go:87] found id: ""
	I0108 21:18:23.985170  181838 logs.go:274] 0 containers: []
	W0108 21:18:23.985175  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:18:23.985186  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:18:23.985226  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:18:24.009725  181838 cri.go:87] found id: ""
	I0108 21:18:24.009750  181838 logs.go:274] 0 containers: []
	W0108 21:18:24.009756  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:18:24.009764  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:18:24.009830  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:18:24.032795  181838 cri.go:87] found id: ""
	I0108 21:18:24.032816  181838 logs.go:274] 0 containers: []
	W0108 21:18:24.032822  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:18:24.032829  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:18:24.032873  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:18:24.055830  181838 cri.go:87] found id: ""
	I0108 21:18:24.055856  181838 logs.go:274] 0 containers: []
	W0108 21:18:24.055864  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:18:24.055873  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:18:24.055926  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:18:24.080022  181838 cri.go:87] found id: ""
	I0108 21:18:24.080045  181838 logs.go:274] 0 containers: []
	W0108 21:18:24.080054  181838 logs.go:276] No container was found matching "kube-controller-manager"
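	Each "listing CRI containers ... found id: \"\"" pair above is the same probe repeated once per control-plane component; a rough shell equivalent of that loop, using only the crictl invocation the log itself shows, would be:
	
	  # for each component, ask crictl for matching container IDs; an empty
	  # result corresponds to the "0 containers" lines above
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kubernetes-dashboard storage-provisioner kube-controller-manager; do
	    echo "== ${name} =="
	    sudo crictl ps -a --quiet --name="${name}"
	  done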
	I0108 21:18:24.080065  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:18:24.080087  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:18:24.136653  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:18:24.136684  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:18:24.136697  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:18:24.192311  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:18:24.192341  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:18:24.217880  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:18:24.217906  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:18:24.234276  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:34 kubernetes-upgrade-210902 kubelet[12477]: E0108 21:17:34.195356   12477 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.234678  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:34 kubernetes-upgrade-210902 kubelet[12488]: E0108 21:17:34.945977   12488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.235054  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:35 kubernetes-upgrade-210902 kubelet[12499]: E0108 21:17:35.695839   12499 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.235430  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:36 kubernetes-upgrade-210902 kubelet[12510]: E0108 21:17:36.446635   12510 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.235839  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:37 kubernetes-upgrade-210902 kubelet[12521]: E0108 21:17:37.195551   12521 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.236237  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:37 kubernetes-upgrade-210902 kubelet[12532]: E0108 21:17:37.945693   12532 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.236651  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:38 kubernetes-upgrade-210902 kubelet[12543]: E0108 21:17:38.697236   12543 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.237025  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:39 kubernetes-upgrade-210902 kubelet[12554]: E0108 21:17:39.446142   12554 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.237424  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:40 kubernetes-upgrade-210902 kubelet[12564]: E0108 21:17:40.198497   12564 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.237813  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:40 kubernetes-upgrade-210902 kubelet[12575]: E0108 21:17:40.949081   12575 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.238187  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:41 kubernetes-upgrade-210902 kubelet[12586]: E0108 21:17:41.700889   12586 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.238573  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:42 kubernetes-upgrade-210902 kubelet[12597]: E0108 21:17:42.447188   12597 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.238957  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:43 kubernetes-upgrade-210902 kubelet[12608]: E0108 21:17:43.196665   12608 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.239350  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:43 kubernetes-upgrade-210902 kubelet[12619]: E0108 21:17:43.947683   12619 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.239727  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:44 kubernetes-upgrade-210902 kubelet[12631]: E0108 21:17:44.696205   12631 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.240076  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:45 kubernetes-upgrade-210902 kubelet[12643]: E0108 21:17:45.445694   12643 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.240424  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:46 kubernetes-upgrade-210902 kubelet[12654]: E0108 21:17:46.194329   12654 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.240776  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:46 kubernetes-upgrade-210902 kubelet[12665]: E0108 21:17:46.947126   12665 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.241129  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:47 kubernetes-upgrade-210902 kubelet[12677]: E0108 21:17:47.696089   12677 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.241474  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:48 kubernetes-upgrade-210902 kubelet[12688]: E0108 21:17:48.447965   12688 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.241839  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:49 kubernetes-upgrade-210902 kubelet[12698]: E0108 21:17:49.195891   12698 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.242191  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:49 kubernetes-upgrade-210902 kubelet[12709]: E0108 21:17:49.945540   12709 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.242546  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:50 kubernetes-upgrade-210902 kubelet[12720]: E0108 21:17:50.697305   12720 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.242895  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:51 kubernetes-upgrade-210902 kubelet[12731]: E0108 21:17:51.445397   12731 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.243237  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:52 kubernetes-upgrade-210902 kubelet[12742]: E0108 21:17:52.197802   12742 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.243655  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:52 kubernetes-upgrade-210902 kubelet[12754]: E0108 21:17:52.948353   12754 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.244011  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:53 kubernetes-upgrade-210902 kubelet[12765]: E0108 21:17:53.695621   12765 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.244360  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:54 kubernetes-upgrade-210902 kubelet[12776]: E0108 21:17:54.446114   12776 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.244710  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:55 kubernetes-upgrade-210902 kubelet[12788]: E0108 21:17:55.196617   12788 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.245053  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:55 kubernetes-upgrade-210902 kubelet[12799]: E0108 21:17:55.945382   12799 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.245402  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:56 kubernetes-upgrade-210902 kubelet[12810]: E0108 21:17:56.697206   12810 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.245753  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:57 kubernetes-upgrade-210902 kubelet[12822]: E0108 21:17:57.464332   12822 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.246100  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:58 kubernetes-upgrade-210902 kubelet[12833]: E0108 21:17:58.196068   12833 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.246444  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:58 kubernetes-upgrade-210902 kubelet[12844]: E0108 21:17:58.945025   12844 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.246790  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:59 kubernetes-upgrade-210902 kubelet[12855]: E0108 21:17:59.695697   12855 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.247133  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:00 kubernetes-upgrade-210902 kubelet[12866]: E0108 21:18:00.444458   12866 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.247489  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:01 kubernetes-upgrade-210902 kubelet[12877]: E0108 21:18:01.194710   12877 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.247897  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:01 kubernetes-upgrade-210902 kubelet[12888]: E0108 21:18:01.946907   12888 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.248447  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:02 kubernetes-upgrade-210902 kubelet[12899]: E0108 21:18:02.695252   12899 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.248961  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:03 kubernetes-upgrade-210902 kubelet[12910]: E0108 21:18:03.446945   12910 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.249483  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:04 kubernetes-upgrade-210902 kubelet[12921]: E0108 21:18:04.195429   12921 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.249972  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:04 kubernetes-upgrade-210902 kubelet[12932]: E0108 21:18:04.944934   12932 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.250331  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:05 kubernetes-upgrade-210902 kubelet[12943]: E0108 21:18:05.697334   12943 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.250739  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:06 kubernetes-upgrade-210902 kubelet[12955]: E0108 21:18:06.446534   12955 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.251137  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:07 kubernetes-upgrade-210902 kubelet[12966]: E0108 21:18:07.198042   12966 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.251562  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:07 kubernetes-upgrade-210902 kubelet[12978]: E0108 21:18:07.944048   12978 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.251921  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:08 kubernetes-upgrade-210902 kubelet[12989]: E0108 21:18:08.695660   12989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.252266  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:09 kubernetes-upgrade-210902 kubelet[13000]: E0108 21:18:09.446303   13000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.252614  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:10 kubernetes-upgrade-210902 kubelet[13010]: E0108 21:18:10.197338   13010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.252969  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:10 kubernetes-upgrade-210902 kubelet[13021]: E0108 21:18:10.947025   13021 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.253315  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:11 kubernetes-upgrade-210902 kubelet[13033]: E0108 21:18:11.699272   13033 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.253670  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:12 kubernetes-upgrade-210902 kubelet[13045]: E0108 21:18:12.446833   13045 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.254068  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:13 kubernetes-upgrade-210902 kubelet[13057]: E0108 21:18:13.196374   13057 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.254420  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:13 kubernetes-upgrade-210902 kubelet[13067]: E0108 21:18:13.946024   13067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.254771  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:14 kubernetes-upgrade-210902 kubelet[13078]: E0108 21:18:14.696082   13078 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.255145  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:15 kubernetes-upgrade-210902 kubelet[13089]: E0108 21:18:15.447827   13089 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.255515  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:16 kubernetes-upgrade-210902 kubelet[13100]: E0108 21:18:16.195195   13100 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.256033  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:16 kubernetes-upgrade-210902 kubelet[13111]: E0108 21:18:16.944749   13111 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.256460  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:17 kubernetes-upgrade-210902 kubelet[13122]: E0108 21:18:17.695276   13122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.256827  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:18 kubernetes-upgrade-210902 kubelet[13133]: E0108 21:18:18.453924   13133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.257179  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:19 kubernetes-upgrade-210902 kubelet[13144]: E0108 21:18:19.202840   13144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.257535  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:19 kubernetes-upgrade-210902 kubelet[13155]: E0108 21:18:19.945755   13155 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.257885  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:20 kubernetes-upgrade-210902 kubelet[13166]: E0108 21:18:20.696665   13166 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.258228  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:21 kubernetes-upgrade-210902 kubelet[13178]: E0108 21:18:21.447207   13178 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.258574  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:22 kubernetes-upgrade-210902 kubelet[13189]: E0108 21:18:22.197045   13189 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.258931  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:22 kubernetes-upgrade-210902 kubelet[13200]: E0108 21:18:22.946229   13200 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.259294  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:23 kubernetes-upgrade-210902 kubelet[13212]: E0108 21:18:23.698135   13212 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
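	Every kubelet problem collected above fails the same way before the kubelet can serve /healthz: the binary exits because it does not recognize --cni-conf-dir, a dockershim-era networking flag that recent kubelets no longer accept, which is consistent with the connection-refused health checks in the kubeadm output. A sketch for finding where that flag is still configured on the node, assuming the kubeadm-flags.env path written by the [kubelet-start] step above and the conventional kubeadm systemd drop-in directory (an assumption here), would be:
	
	  # show the flags the kubelet is started with on the node
	  out/minikube-linux-amd64 ssh -p kubernetes-upgrade-210902 -- sudo cat /var/lib/kubelet/kubeadm-flags.env
	  # search the usual config locations for the stale flag (drop-in path assumed)
	  out/minikube-linux-amd64 ssh -p kubernetes-upgrade-210902 -- sudo grep -r -- --cni-conf-dir /var/lib/kubelet /etc/systemd/system/kubelet.service.d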
	I0108 21:18:24.259411  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:18:24.259430  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0108 21:18:24.292183  181838 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0108 21:16:27.996505   11371 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0108 21:18:24.292232  181838 out.go:239] * 
	* 
	W0108 21:18:24.292425  181838 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0108 21:16:27.996505   11371 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0108 21:16:27.996505   11371 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 21:18:24.292451  181838 out.go:239] * 
	* 
	W0108 21:18:24.293250  181838 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:18:24.296481  181838 out.go:177] X Problems detected in kubelet:
	I0108 21:18:24.297997  181838 out.go:177]   Jan 08 21:17:34 kubernetes-upgrade-210902 kubelet[12477]: E0108 21:17:34.195356   12477 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:18:24.299429  181838 out.go:177]   Jan 08 21:17:34 kubernetes-upgrade-210902 kubelet[12488]: E0108 21:17:34.945977   12488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:18:24.301053  181838 out.go:177]   Jan 08 21:17:35 kubernetes-upgrade-210902 kubelet[12499]: E0108 21:17:35.695839   12499 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:18:24.304811  181838 out.go:177] 
	W0108 21:18:24.306808  181838 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0108 21:16:27.996505   11371 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0108 21:16:27.996505   11371 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 21:18:24.306912  181838 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0108 21:18:24.306980  181838 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0108 21:18:24.309112  181838 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:252: failed to upgrade with newest k8s version. args: out/minikube-linux-amd64 start -p kubernetes-upgrade-210902 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
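The kubelet entries quoted above (run.go:74 "failed to parse kubelet flag: unknown flag: --cni-conf-dir") are the concrete reason the 10248/healthz checks kept refusing connections: the kubelet never stayed up during the v1.25.3 restart. A minimal follow-up sketch, assuming the kubernetes-upgrade-210902 container is still running, is to read the flags file the kubeadm output says it wrote and the kubelet journal inside the node:
	# hypothetical follow-up on the failed profile; the file path is the one named in the [kubelet-start] lines above
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-210902 -- cat /var/lib/kubelet/kubeadm-flags.env
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-210902 -- sudo journalctl -xeu kubelet --no-pager | tail -n 50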
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-210902 version --output=json
version_upgrade_test.go:255: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-210902 version --output=json: exit status 1 (52.09504ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "26",
	    "gitVersion": "v1.26.0",
	    "gitCommit": "b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d",
	    "gitTreeState": "clean",
	    "buildDate": "2022-12-08T19:58:30Z",
	    "goVersion": "go1.19.4",
	    "compiler": "gc",
	    "platform": "linux/amd64"
	  },
	  "kustomizeVersion": "v4.5.7"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:257: error running kubectl: exit status 1
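kubectl still prints the clientVersion block when the server cannot be reached, so the exit status 1 here reflects the refused connection to 192.168.76.2:8443 rather than a problem with the version command itself. A quick probe of that endpoint (address taken from the error above, so treat it as an assumption) could look like:
	# hypothetical apiserver reachability check for the same context
	kubectl --context kubernetes-upgrade-210902 get --raw /healthz
	curl -k https://192.168.76.2:8443/healthz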
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-01-08 21:18:24.737269227 +0000 UTC m=+3066.880441520
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-210902
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-210902:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "73af3f81908f4481ad7c7ecef72bf9a4b30ae15ca0169277a080ef634426ef19",
	        "Created": "2023-01-08T21:09:07.037538133Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182125,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:09:46.165744026Z",
	            "FinishedAt": "2023-01-08T21:09:44.386848635Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/73af3f81908f4481ad7c7ecef72bf9a4b30ae15ca0169277a080ef634426ef19/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/73af3f81908f4481ad7c7ecef72bf9a4b30ae15ca0169277a080ef634426ef19/hostname",
	        "HostsPath": "/var/lib/docker/containers/73af3f81908f4481ad7c7ecef72bf9a4b30ae15ca0169277a080ef634426ef19/hosts",
	        "LogPath": "/var/lib/docker/containers/73af3f81908f4481ad7c7ecef72bf9a4b30ae15ca0169277a080ef634426ef19/73af3f81908f4481ad7c7ecef72bf9a4b30ae15ca0169277a080ef634426ef19-json.log",
	        "Name": "/kubernetes-upgrade-210902",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-210902:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-210902",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bbecdab598e65f923afecba35e89bccf11ccc33a4685d30b8816f90780fabb3f-init/diff:/var/lib/docker/overlay2/08b33854295261bb38a0074055b3dd7f2f489d35ea70ee5a594182ad1dc0fd2b/diff:/var/lib/docker/overlay2/5abe192e44c49d2a25560e4c1ae4a6ba5ec58100f8e1869d306a28b614caf414/diff:/var/lib/docker/overlay2/156a32df2dac676d7cd6558d5ad13be5054eb2fddc8d1b10137bf34bd6b1ddf7/diff:/var/lib/docker/overlay2/f3a5b8630133b5cdc48bbdb98f666efad221386dbd08f5fc096c7ba6d3f31667/diff:/var/lib/docker/overlay2/4d4ccd218e479874818175ec6c4e28f06f0084eed74b9a83af5a96e8bf41e6c7/diff:/var/lib/docker/overlay2/ab8a3247b29cb45a0024dbc2477a50b042f03e39064c09e23d21a2b5f331013c/diff:/var/lib/docker/overlay2/cd6161fa5d366910a9eb537894f374c49b0ba256c81a9c1ad3d4f4786d730668/diff:/var/lib/docker/overlay2/a26d88db409214dee405ed385f40b0d58e37ef670c26e65c1132eb27e3bb984d/diff:/var/lib/docker/overlay2/f76d833c20bf46fe7aec412f931039cdeb58b129552b97228c55fada8b05e63a/diff:/var/lib/docker/overlay2/584beb
80220462b8bc5a6c651735d55544ebef775c71b1a91b92235049a7e9da/diff:/var/lib/docker/overlay2/5209ebd508837f05291d5f22eff307db47f9c94691dd096cdc6ff6446db45461/diff:/var/lib/docker/overlay2/968fc302c0737742d4698bb0026b5cfc50c4b2934930db492c97b346fca2bd67/diff:/var/lib/docker/overlay2/b469d09ff82a75fb0f6dd04264a04b21d3d113c8577afc6dbc6a7dafe4022179/diff:/var/lib/docker/overlay2/2869116cf2f02bdc6170bde623a11022eba8e5c0f84630c9a0fd0a86483d0ed2/diff:/var/lib/docker/overlay2/ad4794ae637b0ce306963007b35562110fba050d9e3cd6196830448c749d7cc6/diff:/var/lib/docker/overlay2/a53b191897dfffb12ad2948cad16d668a0f7c2be5e3e6fd2556fe22d8d608879/diff:/var/lib/docker/overlay2/bbc7772821406640e29f442eb8752d92d7b72e8314035d6cf94255976b90642e/diff:/var/lib/docker/overlay2/5e3ac5600af83e3f6f756895fbd7a66fa97e799f659541508353078d057fd89b/diff:/var/lib/docker/overlay2/79c23e7084aa1de637cb09a7c07679d6c8d4ff6c6e7384f7f15cb549f8186fd6/diff:/var/lib/docker/overlay2/86ef1d4759376f1edb9cf43004ae59ba61200d79acea357b872de5247071531a/diff:/var/lib/d
ocker/overlay2/5c948a7514e051c2a3b4600944d475a674404c1f58b1bd7fd4d7b764be35c3e7/diff:/var/lib/docker/overlay2/71b25d81823337d776e0202f623b2f822a2ebba0e8f935e3211da145bbdac22d/diff:/var/lib/docker/overlay2/4261f26f8c837e7f8f07ba0cd2aca7274aa96561ac0bd104af28d44c15678f8d/diff:/var/lib/docker/overlay2/f2a71b21156c3ad367bb9fe8bde3e75df6681f9e505d34d8e1b635e50aa0e29b/diff:/var/lib/docker/overlay2/9ef1fde443b60bf4cda7ef2a51adcacef4383280f30eb6ee14e23959a5395e18/diff:/var/lib/docker/overlay2/aa727d5a432452b3e1ed2fc994db0bf2d5a47af89bc768a1e2b633baa8319025/diff:/var/lib/docker/overlay2/dc9499b738334b5626de1363086d853c4e6c2c5b2d51f58e8e1ed8346cc39f45/diff:/var/lib/docker/overlay2/03cc962675627b454b9857b5ffcac439143653d35cd42ad886571fbecd955c6c/diff:/var/lib/docker/overlay2/32064543ecb8bae609fda6b220233d7b73f4e601bd320e1d389451fb120c2459/diff:/var/lib/docker/overlay2/a0c00493cc3fdbdaa87f61faf83c8fcedb9ee4d5b24ecc35eceba0429b6dcc88/diff:/var/lib/docker/overlay2/18cbe77f8dab8144889e1deb46a8795fd207afbdfd9ea3fdeb526b72d7e
4a4cb/diff:/var/lib/docker/overlay2/dca4f5d0bca28bc75fa764afd39d07871f051c2d362fc2850ca957b975f75555/diff:/var/lib/docker/overlay2/cd071d8f661b9ff9c6d861fce06842188fda3d307062488b93944a97883f4a67/diff:/var/lib/docker/overlay2/5a0c166a1e11d0bb9f7de4a882be960f70ac972e91b3ab8fd83fc6eec2bf1d5e/diff:/var/lib/docker/overlay2/43eb3ae44d3bf3e6cbf5a66accacb1247ef210213c56a67097949d06794f4bbc/diff:/var/lib/docker/overlay2/b60cb5b525f4aed6b27cf2e451596e134d1185254f3798d5a91357814552e273/diff:/var/lib/docker/overlay2/e5c8c85e78fedb1981436f368ad70bdfe38d240dddceac3d103e1d09bc252cef/diff:/var/lib/docker/overlay2/7bf66ab7d17eda3c6760fe43588afa33bb56279f5a7202de9cc98b4711f0da5b/diff:/var/lib/docker/overlay2/d5dbd6e8a20b8681e7b55b57c382c8c9029da8496cc9e0f4d1eb3b224583535e/diff:/var/lib/docker/overlay2/de3ca9b514aa5071c0fd39d0d80fa080bce6e82e872747ec916ef7bf379c0c2a/diff:/var/lib/docker/overlay2/80bff03df7f2a6639d98a2c99d1cc55a36dc9110501452bfd7bce3bce496541e/diff:/var/lib/docker/overlay2/471bea5a4732f210c3fcf87a02643b615a96f6
39d01ac2c3e71f7886b23733a5/diff:/var/lib/docker/overlay2/29dd84326e3ef52c85008a60a7ff2225fd18f1d51ec3e60f143a0d93e442469d/diff:/var/lib/docker/overlay2/05d035eba3e40a396836d9917064869afd7ec70920f526f8903dc8f5a9f2dce3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bbecdab598e65f923afecba35e89bccf11ccc33a4685d30b8816f90780fabb3f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bbecdab598e65f923afecba35e89bccf11ccc33a4685d30b8816f90780fabb3f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bbecdab598e65f923afecba35e89bccf11ccc33a4685d30b8816f90780fabb3f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-210902",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-210902/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-210902",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-210902",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-210902",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5240aee721db3a1dc257e5f024f18c32bf1295537747cccb7121b6df4d66e53e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32972"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32971"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32968"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32970"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32969"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5240aee721db",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-210902": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "73af3f81908f",
	                        "kubernetes-upgrade-210902"
	                    ],
	                    "NetworkID": "51a67c6b2137cccc7b5abfcfc6af866663f60189dcbea8e8142813a4f8452763",
	                    "EndpointID": "eb36222273eedab7d50d8998bbca613aecc6639a73bc8223bbb8c78904171d31",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
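The full docker inspect dump above is captured for the post-mortem; when only a few fields matter, a Go template keeps the same data readable. A small sketch, assuming the container still exists under the same name:
	# hypothetical narrowing of the inspect output shown above
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' kubernetes-upgrade-210902
	docker inspect -f '{{(index .NetworkSettings.Networks "kubernetes-upgrade-210902").IPAddress}}' kubernetes-upgrade-210902
	docker inspect -f '{{json .NetworkSettings.Ports}}' kubernetes-upgrade-210902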
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-210902 -n kubernetes-upgrade-210902
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-210902 -n kubernetes-upgrade-210902: exit status 2 (354.03256ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-210902 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-210902   | kubernetes-upgrade-210902 | jenkins | v1.28.0 | 08 Jan 23 21:09 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=containerd |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-210733      | missing-upgrade-210733    | jenkins | v1.28.0 | 08 Jan 23 21:09 UTC | 08 Jan 23 21:09 UTC |
	| start   | -p auto-210618 --memory=2048   | auto-210618               | jenkins | v1.28.0 | 08 Jan 23 21:09 UTC | 08 Jan 23 21:10 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m  |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=containerd |                           |         |         |                     |                     |
	| start   | -p pause-210943                | pause-210943              | jenkins | v1.28.0 | 08 Jan 23 21:10 UTC | 08 Jan 23 21:10 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=containerd |                           |         |         |                     |                     |
	| pause   | -p pause-210943                | pause-210943              | jenkins | v1.28.0 | 08 Jan 23 21:10 UTC | 08 Jan 23 21:10 UTC |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	| unpause | -p pause-210943                | pause-210943              | jenkins | v1.28.0 | 08 Jan 23 21:10 UTC | 08 Jan 23 21:10 UTC |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	| pause   | -p pause-210943                | pause-210943              | jenkins | v1.28.0 | 08 Jan 23 21:10 UTC | 08 Jan 23 21:10 UTC |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	| ssh     | -p auto-210618 pgrep -a        | auto-210618               | jenkins | v1.28.0 | 08 Jan 23 21:10 UTC | 08 Jan 23 21:10 UTC |
	|         | kubelet                        |                           |         |         |                     |                     |
	| delete  | -p pause-210943                | pause-210943              | jenkins | v1.28.0 | 08 Jan 23 21:10 UTC | 08 Jan 23 21:10 UTC |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	| start   | -p cert-expiration-210725      | cert-expiration-210725    | jenkins | v1.28.0 | 08 Jan 23 21:10 UTC | 08 Jan 23 21:11 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h        |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=containerd |                           |         |         |                     |                     |
	| profile | list --output json             | minikube                  | jenkins | v1.28.0 | 08 Jan 23 21:10 UTC | 08 Jan 23 21:11 UTC |
	| delete  | -p auto-210618                 | auto-210618               | jenkins | v1.28.0 | 08 Jan 23 21:10 UTC | 08 Jan 23 21:11 UTC |
	| start   | -p kindnet-210619              | kindnet-210619            | jenkins | v1.28.0 | 08 Jan 23 21:11 UTC | 08 Jan 23 21:11 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m  |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker  |                           |         |         |                     |                     |
	|         | --container-runtime=containerd |                           |         |         |                     |                     |
	| delete  | -p pause-210943                | pause-210943              | jenkins | v1.28.0 | 08 Jan 23 21:11 UTC | 08 Jan 23 21:11 UTC |
	| start   | -p cilium-210619 --memory=2048 | cilium-210619             | jenkins | v1.28.0 | 08 Jan 23 21:11 UTC | 08 Jan 23 21:12 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m  |                           |         |         |                     |                     |
	|         | --cni=cilium --driver=docker   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-210725      | cert-expiration-210725    | jenkins | v1.28.0 | 08 Jan 23 21:11 UTC | 08 Jan 23 21:11 UTC |
	| start   | -p calico-210619 --memory=2048 | calico-210619             | jenkins | v1.28.0 | 08 Jan 23 21:11 UTC |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m  |                           |         |         |                     |                     |
	|         | --cni=calico --driver=docker   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd |                           |         |         |                     |                     |
	| ssh     | -p kindnet-210619 pgrep -a     | kindnet-210619            | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	|         | kubelet                        |                           |         |         |                     |                     |
	| delete  | -p kindnet-210619              | kindnet-210619            | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	| start   | -p enable-default-cni-210619   | enable-default-cni-210619 | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m  |                           |         |         |                     |                     |
	|         | --enable-default-cni=true      |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=containerd |                           |         |         |                     |                     |
	| ssh     | -p cilium-210619 pgrep -a      | cilium-210619             | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	|         | kubelet                        |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-210619   | enable-default-cni-210619 | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	|         | pgrep -a kubelet               |                           |         |         |                     |                     |
	| delete  | -p cilium-210619               | cilium-210619             | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	| start   | -p bridge-210619 --memory=2048 | bridge-210619             | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:13 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m  |                           |         |         |                     |                     |
	|         | --cni=bridge --driver=docker   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd |                           |         |         |                     |                     |
	| ssh     | -p bridge-210619 pgrep -a      | bridge-210619             | jenkins | v1.28.0 | 08 Jan 23 21:13 UTC | 08 Jan 23 21:13 UTC |
	|         | kubelet                        |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
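	For reference, the bridge-CNI run recorded in the table above reduces to the following command sequence. This is a minimal reproduction sketch assembled from the flags shown in the table for this run; the profile name, memory, and timeout values are specific to this job and can be adjusted:
	
	  # start a bridge-CNI cluster on the docker driver with the containerd runtime (as in the table)
	  out/minikube-linux-amd64 start -p bridge-210619 --memory=2048 \
	    --alsologtostderr --wait=true --wait-timeout=5m \
	    --cni=bridge --driver=docker --container-runtime=containerd
	
	  # confirm the kubelet is running inside the node container (the follow-up ssh step in the table)
	  out/minikube-linux-amd64 ssh -p bridge-210619 pgrep -a kubelet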
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 21:12:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:12:57.815080  216777 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:12:57.815282  216777 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:12:57.815294  216777 out.go:309] Setting ErrFile to fd 2...
	I0108 21:12:57.815301  216777 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:12:57.815415  216777 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:12:57.816027  216777 out.go:303] Setting JSON to false
	I0108 21:12:57.818138  216777 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3327,"bootTime":1673209051,"procs":1331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:12:57.818209  216777 start.go:135] virtualization: kvm guest
	I0108 21:12:57.821115  216777 out.go:177] * [bridge-210619] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:12:57.822738  216777 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:12:57.822665  216777 notify.go:220] Checking for updates...
	I0108 21:12:57.825779  216777 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:12:57.827565  216777 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:12:57.829328  216777 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:12:57.831049  216777 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:12:57.833161  216777 config.go:180] Loaded profile config "calico-210619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:12:57.833252  216777 config.go:180] Loaded profile config "enable-default-cni-210619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:12:57.833355  216777 config.go:180] Loaded profile config "kubernetes-upgrade-210902": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:12:57.833414  216777 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:12:57.865753  216777 docker.go:137] docker version: linux-20.10.22
	I0108 21:12:57.865858  216777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:12:57.970374  216777 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:57 SystemTime:2023-01-08 21:12:57.889038227 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:12:57.970484  216777 docker.go:254] overlay module found
	I0108 21:12:57.972720  216777 out.go:177] * Using the docker driver based on user configuration
	I0108 21:12:57.974041  216777 start.go:294] selected driver: docker
	I0108 21:12:57.974050  216777 start.go:838] validating driver "docker" against <nil>
	I0108 21:12:57.974069  216777 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:12:57.974922  216777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:12:58.073553  216777 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:57 SystemTime:2023-01-08 21:12:57.995404952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:12:58.073669  216777 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I0108 21:12:58.073821  216777 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:12:58.076106  216777 out.go:177] * Using Docker driver with root privileges
	I0108 21:12:58.077552  216777 cni.go:95] Creating CNI manager for "bridge"
	I0108 21:12:58.077571  216777 start_flags.go:312] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 21:12:58.077587  216777 start_flags.go:317] config:
	{Name:bridge-210619 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:bridge-210619 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containe
rd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:12:58.079188  216777 out.go:177] * Starting control plane node bridge-210619 in cluster bridge-210619
	I0108 21:12:58.080507  216777 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:12:58.081713  216777 out.go:177] * Pulling base image ...
	I0108 21:12:58.083018  216777 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:12:58.083047  216777 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0108 21:12:58.083057  216777 cache.go:57] Caching tarball of preloaded images
	I0108 21:12:58.083068  216777 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:12:58.083278  216777 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:12:58.083297  216777 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0108 21:12:58.083410  216777 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/config.json ...
	I0108 21:12:58.083439  216777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/config.json: {Name:mkd5f4fce4d897fb044044bbda1c39bdc435badf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:12:58.108119  216777 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:12:58.108148  216777 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:12:58.108166  216777 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:12:58.108204  216777 start.go:364] acquiring machines lock for bridge-210619: {Name:mk5d7c0bcdeb32b5619419375b5620066eda9748 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:12:58.108344  216777 start.go:368] acquired machines lock for "bridge-210619" in 117.962µs
	I0108 21:12:58.108374  216777 start.go:93] Provisioning new machine with config: &{Name:bridge-210619 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:bridge-210619 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:12:58.108443  216777 start.go:125] createHost starting for "" (driver="docker")
	I0108 21:12:57.760494  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:59.761097  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:58.111409  216777 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0108 21:12:58.111690  216777 start.go:159] libmachine.API.Create for "bridge-210619" (driver="docker")
	I0108 21:12:58.111727  216777 client.go:168] LocalClient.Create starting
	I0108 21:12:58.111813  216777 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem
	I0108 21:12:58.111848  216777 main.go:134] libmachine: Decoding PEM data...
	I0108 21:12:58.111883  216777 main.go:134] libmachine: Parsing certificate...
	I0108 21:12:58.111956  216777 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem
	I0108 21:12:58.111989  216777 main.go:134] libmachine: Decoding PEM data...
	I0108 21:12:58.112009  216777 main.go:134] libmachine: Parsing certificate...
	I0108 21:12:58.112396  216777 cli_runner.go:164] Run: docker network inspect bridge-210619 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 21:12:58.134766  216777 cli_runner.go:211] docker network inspect bridge-210619 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 21:12:58.134840  216777 network_create.go:272] running [docker network inspect bridge-210619] to gather additional debugging logs...
	I0108 21:12:58.134862  216777 cli_runner.go:164] Run: docker network inspect bridge-210619
	W0108 21:12:58.159093  216777 cli_runner.go:211] docker network inspect bridge-210619 returned with exit code 1
	I0108 21:12:58.159121  216777 network_create.go:275] error running [docker network inspect bridge-210619]: docker network inspect bridge-210619: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-210619
	I0108 21:12:58.159140  216777 network_create.go:277] output of [docker network inspect bridge-210619]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-210619
	
	** /stderr **
	I0108 21:12:58.159197  216777 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:12:58.185321  216777 network.go:244] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b55bc2878bca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d4:2d:1f:91}}
	I0108 21:12:58.186252  216777 network.go:244] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-6ab3f57c56bf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:58:4f:a6:4e}}
	I0108 21:12:58.186844  216777 network.go:244] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-c9c7b4f8f7ef IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:c7:bc:cf:86}}
	I0108 21:12:58.187692  216777 network.go:244] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-51a67c6b2137 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:01:a9:82:8d}}
	I0108 21:12:58.188328  216777 network.go:244] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName:br-ac68185cbd01 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:b1:86:60:bb}}
	I0108 21:12:58.189269  216777 network.go:306] reserving subnet 192.168.94.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.94.0:0xc00011a0d0] misses:0}
	I0108 21:12:58.189302  216777 network.go:239] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 21:12:58.189312  216777 network_create.go:115] attempt to create docker network bridge-210619 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0108 21:12:58.189354  216777 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-210619 bridge-210619
	I0108 21:12:58.251125  216777 network_create.go:99] docker network bridge-210619 192.168.94.0/24 created
	I0108 21:12:58.251170  216777 kic.go:106] calculated static IP "192.168.94.2" for the "bridge-210619" container
	I0108 21:12:58.251248  216777 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 21:12:58.277461  216777 cli_runner.go:164] Run: docker volume create bridge-210619 --label name.minikube.sigs.k8s.io=bridge-210619 --label created_by.minikube.sigs.k8s.io=true
	I0108 21:12:58.300846  216777 oci.go:103] Successfully created a docker volume bridge-210619
	I0108 21:12:58.300930  216777 cli_runner.go:164] Run: docker run --rm --name bridge-210619-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-210619 --entrypoint /usr/bin/test -v bridge-210619:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
	I0108 21:12:58.891013  216777 oci.go:107] Successfully prepared a docker volume bridge-210619
	I0108 21:12:58.891051  216777 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:12:58.891075  216777 kic.go:179] Starting extracting preloaded images to volume ...
	I0108 21:12:58.891153  216777 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-210619:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 21:13:04.023135  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:13:04.117330  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:13:04.117409  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:13:04.141597  181838 cri.go:87] found id: ""
	I0108 21:13:04.141638  181838 logs.go:274] 0 containers: []
	W0108 21:13:04.141650  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:13:04.141660  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:13:04.141711  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:13:04.167715  181838 cri.go:87] found id: ""
	I0108 21:13:04.167739  181838 logs.go:274] 0 containers: []
	W0108 21:13:04.167746  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:13:04.167754  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:13:04.167808  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:13:04.191793  181838 cri.go:87] found id: ""
	I0108 21:13:04.191823  181838 logs.go:274] 0 containers: []
	W0108 21:13:04.191830  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:13:04.191835  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:13:04.191882  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:13:04.217471  181838 cri.go:87] found id: ""
	I0108 21:13:04.217493  181838 logs.go:274] 0 containers: []
	W0108 21:13:04.217499  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:13:04.217507  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:13:04.217557  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:13:04.241482  181838 cri.go:87] found id: ""
	I0108 21:13:04.241504  181838 logs.go:274] 0 containers: []
	W0108 21:13:04.241510  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:13:04.241517  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:13:04.241559  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:13:04.266045  181838 cri.go:87] found id: ""
	I0108 21:13:04.266070  181838 logs.go:274] 0 containers: []
	W0108 21:13:04.266076  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:13:04.266085  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:13:04.266125  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:13:04.289476  181838 cri.go:87] found id: ""
	I0108 21:13:04.289499  181838 logs.go:274] 0 containers: []
	W0108 21:13:04.289508  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:13:04.289516  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:13:04.289573  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:13:04.313137  181838 cri.go:87] found id: ""
	I0108 21:13:04.313160  181838 logs.go:274] 0 containers: []
	W0108 21:13:04.313168  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:13:04.313181  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:13:04.313197  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:13:04.329184  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:14 kubernetes-upgrade-210902 kubelet[3896]: E0108 21:12:14.460710    3896 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.329812  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:15 kubernetes-upgrade-210902 kubelet[3906]: E0108 21:12:15.206611    3906 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.330406  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:15 kubernetes-upgrade-210902 kubelet[3916]: E0108 21:12:15.964363    3916 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.330988  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:16 kubernetes-upgrade-210902 kubelet[3925]: E0108 21:12:16.716055    3925 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.331582  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:17 kubernetes-upgrade-210902 kubelet[3937]: E0108 21:12:17.496333    3937 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.332166  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:18 kubernetes-upgrade-210902 kubelet[3947]: E0108 21:12:18.231269    3947 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.332543  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:18 kubernetes-upgrade-210902 kubelet[3957]: E0108 21:12:18.959236    3957 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.332900  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:19 kubernetes-upgrade-210902 kubelet[3967]: E0108 21:12:19.705286    3967 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.333267  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:20 kubernetes-upgrade-210902 kubelet[3978]: E0108 21:12:20.454549    3978 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.333620  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:21 kubernetes-upgrade-210902 kubelet[3989]: E0108 21:12:21.197865    3989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.333983  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:21 kubernetes-upgrade-210902 kubelet[4000]: E0108 21:12:21.947445    4000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.334414  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:22 kubernetes-upgrade-210902 kubelet[4148]: E0108 21:12:22.696890    4148 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.334915  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:23 kubernetes-upgrade-210902 kubelet[4159]: E0108 21:12:23.444989    4159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.335271  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:24 kubernetes-upgrade-210902 kubelet[4171]: E0108 21:12:24.202370    4171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.335663  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:24 kubernetes-upgrade-210902 kubelet[4181]: E0108 21:12:24.963635    4181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.336013  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:25 kubernetes-upgrade-210902 kubelet[4191]: E0108 21:12:25.697356    4191 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.336373  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:26 kubernetes-upgrade-210902 kubelet[4202]: E0108 21:12:26.451233    4202 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.336730  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:27 kubernetes-upgrade-210902 kubelet[4213]: E0108 21:12:27.200913    4213 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.337081  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:27 kubernetes-upgrade-210902 kubelet[4224]: E0108 21:12:27.948464    4224 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.337431  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:28 kubernetes-upgrade-210902 kubelet[4234]: E0108 21:12:28.699309    4234 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.337897  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:29 kubernetes-upgrade-210902 kubelet[4245]: E0108 21:12:29.446654    4245 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.338425  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:30 kubernetes-upgrade-210902 kubelet[4256]: E0108 21:12:30.195229    4256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.338810  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:30 kubernetes-upgrade-210902 kubelet[4267]: E0108 21:12:30.949224    4267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.339163  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:31 kubernetes-upgrade-210902 kubelet[4278]: E0108 21:12:31.720874    4278 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.339660  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:32 kubernetes-upgrade-210902 kubelet[4288]: E0108 21:12:32.455768    4288 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.340020  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:33 kubernetes-upgrade-210902 kubelet[4430]: E0108 21:12:33.197494    4430 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.340381  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:33 kubernetes-upgrade-210902 kubelet[4442]: E0108 21:12:33.952611    4442 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.340742  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:34 kubernetes-upgrade-210902 kubelet[4453]: E0108 21:12:34.705940    4453 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.341116  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:35 kubernetes-upgrade-210902 kubelet[4465]: E0108 21:12:35.450199    4465 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.341591  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:36 kubernetes-upgrade-210902 kubelet[4476]: E0108 21:12:36.227223    4476 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.341955  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:36 kubernetes-upgrade-210902 kubelet[4487]: E0108 21:12:36.968970    4487 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.342337  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:37 kubernetes-upgrade-210902 kubelet[4498]: E0108 21:12:37.697629    4498 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.342723  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:38 kubernetes-upgrade-210902 kubelet[4510]: E0108 21:12:38.466220    4510 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.343080  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:39 kubernetes-upgrade-210902 kubelet[4519]: E0108 21:12:39.201764    4519 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.343444  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:39 kubernetes-upgrade-210902 kubelet[4530]: E0108 21:12:39.946359    4530 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.343830  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:40 kubernetes-upgrade-210902 kubelet[4541]: E0108 21:12:40.707903    4541 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.344183  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:41 kubernetes-upgrade-210902 kubelet[4553]: E0108 21:12:41.445616    4553 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.344537  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:42 kubernetes-upgrade-210902 kubelet[4564]: E0108 21:12:42.202869    4564 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.344896  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:42 kubernetes-upgrade-210902 kubelet[4574]: E0108 21:12:42.946836    4574 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.345248  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:43 kubernetes-upgrade-210902 kubelet[4723]: E0108 21:12:43.701425    4723 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.345604  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:44 kubernetes-upgrade-210902 kubelet[4733]: E0108 21:12:44.451142    4733 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.345956  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:45 kubernetes-upgrade-210902 kubelet[4743]: E0108 21:12:45.201516    4743 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.346400  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:45 kubernetes-upgrade-210902 kubelet[4753]: E0108 21:12:45.951223    4753 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.346884  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:46 kubernetes-upgrade-210902 kubelet[4764]: E0108 21:12:46.702469    4764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.347237  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:47 kubernetes-upgrade-210902 kubelet[4775]: E0108 21:12:47.477715    4775 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.347618  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:48 kubernetes-upgrade-210902 kubelet[4785]: E0108 21:12:48.211798    4785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.347973  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:48 kubernetes-upgrade-210902 kubelet[4794]: E0108 21:12:48.948316    4794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.348325  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:49 kubernetes-upgrade-210902 kubelet[4804]: E0108 21:12:49.699460    4804 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.348699  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:50 kubernetes-upgrade-210902 kubelet[4814]: E0108 21:12:50.446250    4814 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.349050  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4825]: E0108 21:12:51.274290    4825 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.349403  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4835]: E0108 21:12:51.981369    4835 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.349798  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:52 kubernetes-upgrade-210902 kubelet[4846]: E0108 21:12:52.696913    4846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.350323  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:53 kubernetes-upgrade-210902 kubelet[4857]: E0108 21:12:53.463386    4857 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.350811  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:54 kubernetes-upgrade-210902 kubelet[5003]: E0108 21:12:54.195862    5003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.351163  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:54 kubernetes-upgrade-210902 kubelet[5014]: E0108 21:12:54.973538    5014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.351609  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:55 kubernetes-upgrade-210902 kubelet[5025]: E0108 21:12:55.700041    5025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.351966  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:56 kubernetes-upgrade-210902 kubelet[5036]: E0108 21:12:56.460751    5036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.352319  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5046]: E0108 21:12:57.202829    5046 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.352692  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5056]: E0108 21:12:57.949309    5056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.353051  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:58 kubernetes-upgrade-210902 kubelet[5067]: E0108 21:12:58.695134    5067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.353402  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:59 kubernetes-upgrade-210902 kubelet[5078]: E0108 21:12:59.453700    5078 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.353759  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5090]: E0108 21:13:00.195809    5090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.354118  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5101]: E0108 21:13:00.949365    5101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.354467  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:01 kubernetes-upgrade-210902 kubelet[5112]: E0108 21:13:01.699957    5112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.354819  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:02 kubernetes-upgrade-210902 kubelet[5122]: E0108 21:13:02.445014    5122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.355173  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5133]: E0108 21:13:03.211715    5133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.355556  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5144]: E0108 21:13:03.945571    5144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:04.355688  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:13:04.355704  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:13:04.370635  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:13:04.370665  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:13:04.426551  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:13:04.426574  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:13:04.426586  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:13:04.474170  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:13:04.474218  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:13:04.504353  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:04.504380  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:13:04.504491  181838 out.go:239] X Problems detected in kubelet:
	W0108 21:13:04.504505  181838 out.go:239]   Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5101]: E0108 21:13:00.949365    5101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.504511  181838 out.go:239]   Jan 08 21:13:01 kubernetes-upgrade-210902 kubelet[5112]: E0108 21:13:01.699957    5112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.504517  181838 out.go:239]   Jan 08 21:13:02 kubernetes-upgrade-210902 kubelet[5122]: E0108 21:13:02.445014    5122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.504524  181838 out.go:239]   Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5133]: E0108 21:13:03.211715    5133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:04.504529  181838 out.go:239]   Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5144]: E0108 21:13:03.945571    5144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:04.504534  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:04.504539  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
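	The block above shows the diagnostics minikube gathers when the kubernetes-upgrade-210902 start stalls: the kubelet journal, CRI container listing, and a node describe. A condensed sketch of running the same checks by hand is shown below; it assumes the profile container is still up and wraps the exact commands from the log in "minikube ssh" rather than minikube's internal ssh runner:
	
	  # recent kubelet log, where the repeated "--cni-conf-dir" failures above appear
	  out/minikube-linux-amd64 ssh -p kubernetes-upgrade-210902 -- sudo journalctl -u kubelet -n 400
	
	  # list all CRI containers (none are found in this run)
	  out/minikube-linux-amd64 ssh -p kubernetes-upgrade-210902 -- sudo crictl ps -a
	
	  # describe nodes with the bundled kubectl (fails here because the apiserver on localhost:8443 is not up)
	  out/minikube-linux-amd64 ssh -p kubernetes-upgrade-210902 -- sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig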
	I0108 21:13:02.260296  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:04.260388  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:04.763171  216777 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-210619:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (5.871950104s)
	I0108 21:13:04.763197  216777 kic.go:188] duration metric: took 5.872121 seconds to extract preloaded images to volume
	W0108 21:13:04.763310  216777 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 21:13:04.763394  216777 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 21:13:04.862899  216777 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-210619 --name bridge-210619 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-210619 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-210619 --network bridge-210619 --ip 192.168.94.2 --volume bridge-210619:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
	I0108 21:13:05.276700  216777 cli_runner.go:164] Run: docker container inspect bridge-210619 --format={{.State.Running}}
	I0108 21:13:05.306371  216777 cli_runner.go:164] Run: docker container inspect bridge-210619 --format={{.State.Status}}
	I0108 21:13:05.330903  216777 cli_runner.go:164] Run: docker exec bridge-210619 stat /var/lib/dpkg/alternatives/iptables
	I0108 21:13:05.386254  216777 oci.go:144] the created container "bridge-210619" has a running status.
	I0108 21:13:05.386294  216777 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/bridge-210619/id_rsa...
	I0108 21:13:05.542741  216777 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15565-3617/.minikube/machines/bridge-210619/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 21:13:05.622495  216777 cli_runner.go:164] Run: docker container inspect bridge-210619 --format={{.State.Status}}
	I0108 21:13:05.650943  216777 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 21:13:05.650966  216777 kic_runner.go:114] Args: [docker exec --privileged bridge-210619 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 21:13:05.735052  216777 cli_runner.go:164] Run: docker container inspect bridge-210619 --format={{.State.Status}}
	I0108 21:13:05.767098  216777 machine.go:88] provisioning docker machine ...
	I0108 21:13:05.767226  216777 ubuntu.go:169] provisioning hostname "bridge-210619"
	I0108 21:13:05.767297  216777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-210619
	I0108 21:13:05.796625  216777 main.go:134] libmachine: Using SSH client type: native
	I0108 21:13:05.797166  216777 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33007 <nil> <nil>}
	I0108 21:13:05.797190  216777 main.go:134] libmachine: About to run SSH command:
	sudo hostname bridge-210619 && echo "bridge-210619" | sudo tee /etc/hostname
	I0108 21:13:05.924855  216777 main.go:134] libmachine: SSH cmd err, output: <nil>: bridge-210619
	
	I0108 21:13:05.924933  216777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-210619
	I0108 21:13:05.954664  216777 main.go:134] libmachine: Using SSH client type: native
	I0108 21:13:05.954813  216777 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33007 <nil> <nil>}
	I0108 21:13:05.954832  216777 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-210619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-210619/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-210619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:13:06.071282  216777 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:13:06.071316  216777 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:13:06.071338  216777 ubuntu.go:177] setting up certificates
	I0108 21:13:06.071348  216777 provision.go:83] configureAuth start
	I0108 21:13:06.071414  216777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-210619
	I0108 21:13:06.096396  216777 provision.go:138] copyHostCerts
	I0108 21:13:06.096469  216777 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:13:06.096496  216777 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:13:06.096579  216777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:13:06.096670  216777 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:13:06.096681  216777 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:13:06.096721  216777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:13:06.096785  216777 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:13:06.096796  216777 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:13:06.096831  216777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:13:06.096888  216777 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.bridge-210619 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube bridge-210619]
	I0108 21:13:06.159101  216777 provision.go:172] copyRemoteCerts
	I0108 21:13:06.159149  216777 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:13:06.159198  216777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-210619
	I0108 21:13:06.186395  216777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33007 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/bridge-210619/id_rsa Username:docker}
	I0108 21:13:06.278626  216777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:13:06.295667  216777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0108 21:13:06.312826  216777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:13:06.329744  216777 provision.go:86] duration metric: configureAuth took 258.383369ms
	I0108 21:13:06.329776  216777 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:13:06.329935  216777 config.go:180] Loaded profile config "bridge-210619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:13:06.329948  216777 machine.go:91] provisioned docker machine in 562.738787ms
	I0108 21:13:06.329956  216777 client.go:171] LocalClient.Create took 8.218215946s
	I0108 21:13:06.329978  216777 start.go:167] duration metric: libmachine.API.Create for "bridge-210619" took 8.218288066s
	I0108 21:13:06.329992  216777 start.go:300] post-start starting for "bridge-210619" (driver="docker")
	I0108 21:13:06.330001  216777 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:13:06.330059  216777 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:13:06.330107  216777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-210619
	I0108 21:13:06.356665  216777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33007 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/bridge-210619/id_rsa Username:docker}
	I0108 21:13:06.447222  216777 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:13:06.449912  216777 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:13:06.449942  216777 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:13:06.449959  216777 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:13:06.449967  216777 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:13:06.449975  216777 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:13:06.450026  216777 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:13:06.450099  216777 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:13:06.450196  216777 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:13:06.456930  216777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:13:06.474146  216777 start.go:303] post-start completed in 144.133039ms
	I0108 21:13:06.474523  216777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-210619
	I0108 21:13:06.498751  216777 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/config.json ...
	I0108 21:13:06.498978  216777 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:13:06.499013  216777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-210619
	I0108 21:13:06.523533  216777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33007 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/bridge-210619/id_rsa Username:docker}
	I0108 21:13:06.604038  216777 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:13:06.607921  216777 start.go:128] duration metric: createHost completed in 8.499466487s
	I0108 21:13:06.607967  216777 start.go:83] releasing machines lock for "bridge-210619", held for 8.499606822s
	I0108 21:13:06.608060  216777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-210619
	I0108 21:13:06.634451  216777 ssh_runner.go:195] Run: cat /version.json
	I0108 21:13:06.634497  216777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-210619
	I0108 21:13:06.634564  216777 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:13:06.634627  216777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-210619
	I0108 21:13:06.659960  216777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33007 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/bridge-210619/id_rsa Username:docker}
	I0108 21:13:06.661312  216777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33007 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/bridge-210619/id_rsa Username:docker}
	I0108 21:13:06.743021  216777 ssh_runner.go:195] Run: systemctl --version
	I0108 21:13:06.772451  216777 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:13:06.782550  216777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:13:06.791355  216777 docker.go:189] disabling docker service ...
	I0108 21:13:06.791406  216777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:13:06.807834  216777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:13:06.817102  216777 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:13:06.901819  216777 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:13:06.985728  216777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:13:06.995056  216777 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:13:07.007216  216777 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:13:07.014890  216777 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:13:07.022699  216777 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:13:07.030260  216777 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
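	Taken together, the tee and the four sed edits above should leave the runtime configuration roughly as sketched below; this is a reconstruction from the commands in the log, not a file captured from the node:

	    # crictl endpoint config written by the tee above
	    cat /etc/crictl.yaml
	    #   runtime-endpoint: unix:///run/containerd/containerd.sock
	    #   image-endpoint: unix:///run/containerd/containerd.sock

	    # keys rewritten in the containerd config by the sed commands above
	    grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	    #   sandbox_image = "registry.k8s.io/pause:3.8"
	    #   restrict_oom_score_adj = false
	    #   SystemdCgroup = false
	    #   conf_dir = "/etc/cni/net.d"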
	I0108 21:13:07.037693  216777 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:13:07.043987  216777 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:13:07.050159  216777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:13:07.123734  216777 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:13:07.200187  216777 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:13:07.200252  216777 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:13:07.204065  216777 start.go:472] Will wait 60s for crictl version
	I0108 21:13:07.204127  216777 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:13:07.231085  216777 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:13:07.231144  216777 ssh_runner.go:195] Run: containerd --version
	I0108 21:13:07.255155  216777 ssh_runner.go:195] Run: containerd --version
	I0108 21:13:07.282043  216777 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:13:07.283525  216777 cli_runner.go:164] Run: docker network inspect bridge-210619 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:13:07.306780  216777 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0108 21:13:07.310016  216777 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:13:07.319089  216777 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:13:07.319145  216777 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:13:07.343063  216777 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:13:07.343086  216777 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:13:07.343131  216777 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:13:07.366983  216777 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:13:07.367004  216777 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:13:07.367044  216777 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:13:07.391421  216777 cni.go:95] Creating CNI manager for "bridge"
	I0108 21:13:07.391447  216777 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:13:07.391461  216777 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-210619 NodeName:bridge-210619 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:13:07.391666  216777 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "bridge-210619"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:13:07.391775  216777 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=bridge-210619 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:bridge-210619 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:}
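	The unit override shown above is installed as a systemd drop-in a few lines below (10-kubeadm.conf). If a drop-in like this were changed by hand, the usual reload sequence, the same pattern this log uses for containerd, would be:

	    sudo systemctl daemon-reload
	    sudo systemctl restart kubelet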
	I0108 21:13:07.391833  216777 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:13:07.400276  216777 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:13:07.400347  216777 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:13:07.407868  216777 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (506 bytes)
	I0108 21:13:07.452268  216777 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:13:07.466849  216777 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2042 bytes)
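	The 2042-byte kubeadm.yaml.new written here is the rendered config dumped earlier. Once it has been copied into place as /var/tmp/minikube/kubeadm.yaml (see the cp further down), a sketch of a non-destructive sanity check, not something this test performs, would be:

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run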
	I0108 21:13:07.479997  216777 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:13:07.483537  216777 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:13:07.492993  216777 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619 for IP: 192.168.94.2
	I0108 21:13:07.493087  216777 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:13:07.493141  216777 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:13:07.493181  216777 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.key
	I0108 21:13:07.493196  216777 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt with IP's: []
	I0108 21:13:07.616376  216777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt ...
	I0108 21:13:07.616407  216777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: {Name:mkc6794741f7be06e4fef566153efdcc119dd7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:13:07.616629  216777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.key ...
	I0108 21:13:07.616645  216777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.key: {Name:mk2eac91e6c7ce9c6a782cd8d098d36eee34407b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:13:07.616769  216777 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/apiserver.key.ad8e880a
	I0108 21:13:07.616787  216777 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/apiserver.crt.ad8e880a with IP's: [192.168.94.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 21:13:07.797744  216777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/apiserver.crt.ad8e880a ...
	I0108 21:13:07.797783  216777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/apiserver.crt.ad8e880a: {Name:mkef5e10087ad83cb6554db9503e10e8453e5b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:13:07.798012  216777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/apiserver.key.ad8e880a ...
	I0108 21:13:07.798030  216777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/apiserver.key.ad8e880a: {Name:mkddf1b3895158a9c458bc5c1360ed05fefb5091 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:13:07.798153  216777 certs.go:320] copying /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/apiserver.crt.ad8e880a -> /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/apiserver.crt
	I0108 21:13:07.798244  216777 certs.go:324] copying /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/apiserver.key.ad8e880a -> /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/apiserver.key
	I0108 21:13:07.798321  216777 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/proxy-client.key
	I0108 21:13:07.798345  216777 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/proxy-client.crt with IP's: []
	I0108 21:13:07.974021  216777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/proxy-client.crt ...
	I0108 21:13:07.974054  216777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/proxy-client.crt: {Name:mke0a1678b72e0f3021ded17c9945e0a854a105a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:13:07.974298  216777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/proxy-client.key ...
	I0108 21:13:07.974317  216777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/proxy-client.key: {Name:mka5682e82ccc35b636572146e618ec06d7d102d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:13:07.974544  216777 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:13:07.974605  216777 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:13:07.974624  216777 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:13:07.974660  216777 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:13:07.974710  216777 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:13:07.974745  216777 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:13:07.974805  216777 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:13:07.975427  216777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:13:07.993431  216777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:13:08.010609  216777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:13:08.027675  216777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 21:13:08.044209  216777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:13:08.061077  216777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:13:08.077667  216777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:13:08.094243  216777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:13:08.111163  216777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:13:08.128409  216777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:13:08.145355  216777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:13:08.162298  216777 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:13:08.174861  216777 ssh_runner.go:195] Run: openssl version
	I0108 21:13:08.179567  216777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:13:08.186599  216777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:13:08.189492  216777 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:13:08.189526  216777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:13:08.194593  216777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:13:08.202052  216777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:13:08.209302  216777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:13:08.212282  216777 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:13:08.212317  216777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:13:08.217015  216777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:13:08.224204  216777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:13:08.231197  216777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:13:08.234264  216777 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:13:08.234305  216777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:13:08.239006  216777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
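	The ln -fs commands above link each certificate into /etc/ssl/certs under its OpenSSL subject hash (the value printed by the preceding `openssl x509 -hash -noout` runs). An equivalent sketch for one of the files, with the hash computed rather than hard-coded, is:

	    # link a CA certificate under its subject hash so OpenSSL-based tools find it
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"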
	I0108 21:13:08.246095  216777 kubeadm.go:396] StartCluster: {Name:bridge-210619 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:bridge-210619 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:13:08.246185  216777 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:13:08.246225  216777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:13:08.271579  216777 cri.go:87] found id: ""
	I0108 21:13:08.271637  216777 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:13:08.278801  216777 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:13:08.285704  216777 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:13:08.285746  216777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:13:08.292345  216777 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:13:08.292378  216777 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:13:08.334444  216777 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:13:08.334521  216777 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:13:08.362737  216777 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:13:08.362806  216777 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:13:08.362840  216777 kubeadm.go:317] OS: Linux
	I0108 21:13:08.362885  216777 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:13:08.362926  216777 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:13:08.362971  216777 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:13:08.363024  216777 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:13:08.363065  216777 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:13:08.363123  216777 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:13:08.363179  216777 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:13:08.363273  216777 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:13:08.363342  216777 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:13:08.431671  216777 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:13:08.431820  216777 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:13:08.431946  216777 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:13:08.554620  216777 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:13:08.558015  216777 out.go:204]   - Generating certificates and keys ...
	I0108 21:13:08.558142  216777 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:13:08.558263  216777 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:13:08.624226  216777 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 21:13:08.834298  216777 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0108 21:13:09.153832  216777 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0108 21:13:09.239148  216777 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0108 21:13:09.403917  216777 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0108 21:13:09.404085  216777 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [bridge-210619 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0108 21:13:09.546969  216777 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0108 21:13:09.547143  216777 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [bridge-210619 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0108 21:13:09.746026  216777 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 21:13:09.940223  216777 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 21:13:10.246527  216777 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0108 21:13:10.246632  216777 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:13:10.409235  216777 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:13:10.518820  216777 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:13:10.613986  216777 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:13:10.724379  216777 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:13:10.736252  216777 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:13:10.737150  216777 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:13:10.737250  216777 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:13:10.822597  216777 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:13:06.760313  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:09.260256  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:10.826847  216777 out.go:204]   - Booting up control plane ...
	I0108 21:13:10.826990  216777 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:13:10.827103  216777 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:13:10.827190  216777 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:13:10.827552  216777 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:13:10.830350  216777 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:13:14.506067  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:13:14.616992  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:13:14.617065  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:13:14.640908  181838 cri.go:87] found id: ""
	I0108 21:13:14.640931  181838 logs.go:274] 0 containers: []
	W0108 21:13:14.640937  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:13:14.640943  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:13:14.640984  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:13:14.669486  181838 cri.go:87] found id: ""
	I0108 21:13:14.669512  181838 logs.go:274] 0 containers: []
	W0108 21:13:14.669519  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:13:14.669525  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:13:14.669578  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:13:14.693768  181838 cri.go:87] found id: ""
	I0108 21:13:14.693798  181838 logs.go:274] 0 containers: []
	W0108 21:13:14.693806  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:13:14.693812  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:13:14.693854  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:13:14.719500  181838 cri.go:87] found id: ""
	I0108 21:13:14.719528  181838 logs.go:274] 0 containers: []
	W0108 21:13:14.719537  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:13:14.719545  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:13:14.719603  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:13:14.744451  181838 cri.go:87] found id: ""
	I0108 21:13:14.744489  181838 logs.go:274] 0 containers: []
	W0108 21:13:14.744497  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:13:14.744510  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:13:14.744556  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:13:14.770184  181838 cri.go:87] found id: ""
	I0108 21:13:14.770210  181838 logs.go:274] 0 containers: []
	W0108 21:13:14.770217  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:13:14.770223  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:13:14.770265  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:13:14.794087  181838 cri.go:87] found id: ""
	I0108 21:13:14.794113  181838 logs.go:274] 0 containers: []
	W0108 21:13:14.794119  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:13:14.794125  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:13:14.794175  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:13:14.821619  181838 cri.go:87] found id: ""
	I0108 21:13:14.821645  181838 logs.go:274] 0 containers: []
	W0108 21:13:14.821653  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:13:14.821664  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:13:14.821678  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:13:14.837613  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:24 kubernetes-upgrade-210902 kubelet[4181]: E0108 21:12:24.963635    4181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.838156  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:25 kubernetes-upgrade-210902 kubelet[4191]: E0108 21:12:25.697356    4191 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.838733  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:26 kubernetes-upgrade-210902 kubelet[4202]: E0108 21:12:26.451233    4202 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.839251  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:27 kubernetes-upgrade-210902 kubelet[4213]: E0108 21:12:27.200913    4213 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.839882  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:27 kubernetes-upgrade-210902 kubelet[4224]: E0108 21:12:27.948464    4224 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.840310  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:28 kubernetes-upgrade-210902 kubelet[4234]: E0108 21:12:28.699309    4234 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.840846  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:29 kubernetes-upgrade-210902 kubelet[4245]: E0108 21:12:29.446654    4245 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.841297  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:30 kubernetes-upgrade-210902 kubelet[4256]: E0108 21:12:30.195229    4256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.841831  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:30 kubernetes-upgrade-210902 kubelet[4267]: E0108 21:12:30.949224    4267 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.842455  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:31 kubernetes-upgrade-210902 kubelet[4278]: E0108 21:12:31.720874    4278 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.843013  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:32 kubernetes-upgrade-210902 kubelet[4288]: E0108 21:12:32.455768    4288 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.843558  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:33 kubernetes-upgrade-210902 kubelet[4430]: E0108 21:12:33.197494    4430 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.844135  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:33 kubernetes-upgrade-210902 kubelet[4442]: E0108 21:12:33.952611    4442 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.844603  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:34 kubernetes-upgrade-210902 kubelet[4453]: E0108 21:12:34.705940    4453 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.844966  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:35 kubernetes-upgrade-210902 kubelet[4465]: E0108 21:12:35.450199    4465 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.845318  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:36 kubernetes-upgrade-210902 kubelet[4476]: E0108 21:12:36.227223    4476 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.845697  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:36 kubernetes-upgrade-210902 kubelet[4487]: E0108 21:12:36.968970    4487 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.846066  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:37 kubernetes-upgrade-210902 kubelet[4498]: E0108 21:12:37.697629    4498 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.846441  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:38 kubernetes-upgrade-210902 kubelet[4510]: E0108 21:12:38.466220    4510 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.846806  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:39 kubernetes-upgrade-210902 kubelet[4519]: E0108 21:12:39.201764    4519 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.847178  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:39 kubernetes-upgrade-210902 kubelet[4530]: E0108 21:12:39.946359    4530 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.847632  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:40 kubernetes-upgrade-210902 kubelet[4541]: E0108 21:12:40.707903    4541 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.847985  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:41 kubernetes-upgrade-210902 kubelet[4553]: E0108 21:12:41.445616    4553 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.848338  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:42 kubernetes-upgrade-210902 kubelet[4564]: E0108 21:12:42.202869    4564 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.848697  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:42 kubernetes-upgrade-210902 kubelet[4574]: E0108 21:12:42.946836    4574 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.849047  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:43 kubernetes-upgrade-210902 kubelet[4723]: E0108 21:12:43.701425    4723 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.849407  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:44 kubernetes-upgrade-210902 kubelet[4733]: E0108 21:12:44.451142    4733 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.849764  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:45 kubernetes-upgrade-210902 kubelet[4743]: E0108 21:12:45.201516    4743 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.850123  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:45 kubernetes-upgrade-210902 kubelet[4753]: E0108 21:12:45.951223    4753 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.850482  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:46 kubernetes-upgrade-210902 kubelet[4764]: E0108 21:12:46.702469    4764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.850846  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:47 kubernetes-upgrade-210902 kubelet[4775]: E0108 21:12:47.477715    4775 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.851215  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:48 kubernetes-upgrade-210902 kubelet[4785]: E0108 21:12:48.211798    4785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.851638  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:48 kubernetes-upgrade-210902 kubelet[4794]: E0108 21:12:48.948316    4794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.852007  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:49 kubernetes-upgrade-210902 kubelet[4804]: E0108 21:12:49.699460    4804 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.852353  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:50 kubernetes-upgrade-210902 kubelet[4814]: E0108 21:12:50.446250    4814 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.852733  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4825]: E0108 21:12:51.274290    4825 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.853085  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4835]: E0108 21:12:51.981369    4835 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.853434  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:52 kubernetes-upgrade-210902 kubelet[4846]: E0108 21:12:52.696913    4846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.853788  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:53 kubernetes-upgrade-210902 kubelet[4857]: E0108 21:12:53.463386    4857 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.854157  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:54 kubernetes-upgrade-210902 kubelet[5003]: E0108 21:12:54.195862    5003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.854515  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:54 kubernetes-upgrade-210902 kubelet[5014]: E0108 21:12:54.973538    5014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.854866  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:55 kubernetes-upgrade-210902 kubelet[5025]: E0108 21:12:55.700041    5025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.855289  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:56 kubernetes-upgrade-210902 kubelet[5036]: E0108 21:12:56.460751    5036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.855716  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5046]: E0108 21:12:57.202829    5046 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.856099  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5056]: E0108 21:12:57.949309    5056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.856471  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:58 kubernetes-upgrade-210902 kubelet[5067]: E0108 21:12:58.695134    5067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.856849  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:59 kubernetes-upgrade-210902 kubelet[5078]: E0108 21:12:59.453700    5078 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.857224  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5090]: E0108 21:13:00.195809    5090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.857610  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5101]: E0108 21:13:00.949365    5101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.857985  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:01 kubernetes-upgrade-210902 kubelet[5112]: E0108 21:13:01.699957    5112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.858362  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:02 kubernetes-upgrade-210902 kubelet[5122]: E0108 21:13:02.445014    5122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.858755  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5133]: E0108 21:13:03.211715    5133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.859316  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5144]: E0108 21:13:03.945571    5144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.859923  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:04 kubernetes-upgrade-210902 kubelet[5291]: E0108 21:13:04.696837    5291 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.860343  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:05 kubernetes-upgrade-210902 kubelet[5302]: E0108 21:13:05.455832    5302 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.860702  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:06 kubernetes-upgrade-210902 kubelet[5311]: E0108 21:13:06.202481    5311 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.861055  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:06 kubernetes-upgrade-210902 kubelet[5321]: E0108 21:13:06.953379    5321 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.861404  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:07 kubernetes-upgrade-210902 kubelet[5332]: E0108 21:13:07.695770    5332 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.861769  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:08 kubernetes-upgrade-210902 kubelet[5342]: E0108 21:13:08.448723    5342 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.862125  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5352]: E0108 21:13:09.203634    5352 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.862472  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5363]: E0108 21:13:09.945857    5363 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.862823  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:10 kubernetes-upgrade-210902 kubelet[5373]: E0108 21:13:10.695131    5373 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.863174  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:11 kubernetes-upgrade-210902 kubelet[5384]: E0108 21:13:11.445518    5384 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.863595  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5395]: E0108 21:13:12.198327    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.863957  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5405]: E0108 21:13:12.948375    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.864304  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:13 kubernetes-upgrade-210902 kubelet[5415]: E0108 21:13:13.704963    5415 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:14.864723  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:14 kubernetes-upgrade-210902 kubelet[5426]: E0108 21:13:14.445641    5426 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:14.864841  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:13:14.864855  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:13:14.882066  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:13:14.882092  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:13:14.940199  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:13:14.940222  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:13:14.940235  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:13:14.981672  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:13:14.981703  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:13:15.008917  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:15.008942  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:13:15.009059  181838 out.go:239] X Problems detected in kubelet:
	W0108 21:13:15.009076  181838 out.go:239]   Jan 08 21:13:11 kubernetes-upgrade-210902 kubelet[5384]: E0108 21:13:11.445518    5384 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:15.009084  181838 out.go:239]   Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5395]: E0108 21:13:12.198327    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:15.009091  181838 out.go:239]   Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5405]: E0108 21:13:12.948375    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:15.009103  181838 out.go:239]   Jan 08 21:13:13 kubernetes-upgrade-210902 kubelet[5415]: E0108 21:13:13.704963    5415 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:15.009113  181838 out.go:239]   Jan 08 21:13:14 kubernetes-upgrade-210902 kubelet[5426]: E0108 21:13:14.445641    5426 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:15.009120  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:15.009131  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
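
The repeated kubelet restarts above all die on the same flag: --cni-conf-dir was one of the dockershim-era networking flags removed in Kubernetes 1.24, so a kubelet unit that still passes it exits before it ever registers with the apiserver, which is why the kubernetes-upgrade run never regains a control plane. A hypothetical sketch of stripping such flags before an upgrade (the exact set of flags to drop is an assumption for illustration, not minikube's actual logic):

// Hypothetical sketch: drop kubelet flags that were removed together with
// dockershim in Kubernetes 1.24, the cause of the repeated
// "unknown flag: --cni-conf-dir" failures above.
package main

import (
	"fmt"
	"strings"
)

// removedInV124 lists dockershim-era flags kubelet no longer accepts.
// The exact set to strip is an assumption for illustration.
var removedInV124 = map[string]bool{
	"--cni-conf-dir":   true,
	"--cni-bin-dir":    true,
	"--network-plugin": true,
}

// stripRemovedFlags filters out the removed flags, including their value when
// it is passed as the following token (all three flags above take a value).
func stripRemovedFlags(args []string) []string {
	var kept []string
	skipNext := false
	for _, a := range args {
		if skipNext {
			skipNext = false
			continue
		}
		name := a
		if i := strings.Index(a, "="); i >= 0 {
			name = a[:i]
		}
		if removedInV124[name] {
			// "--flag value" form: also drop the following token.
			skipNext = name == a
			continue
		}
		kept = append(kept, a)
	}
	return kept
}

func main() {
	args := []string{"--cni-conf-dir=/etc/cni/net.d", "--config=/var/lib/kubelet/config.yaml"}
	fmt.Println(stripRemovedFlags(args))
}
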
	I0108 21:13:11.763874  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:14.260795  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:16.832982  216777 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002671 seconds
	I0108 21:13:16.833119  216777 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:13:16.844190  216777 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:13:17.359583  216777 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:13:17.359778  216777 kubeadm.go:317] [mark-control-plane] Marking the node bridge-210619 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:13:17.866994  216777 kubeadm.go:317] [bootstrap-token] Using token: xb30nd.70z1f3ptwlr51nm5
	I0108 21:13:17.868698  216777 out.go:204]   - Configuring RBAC rules ...
	I0108 21:13:17.868877  216777 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:13:17.871544  216777 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:13:17.876306  216777 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:13:17.878401  216777 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:13:17.880084  216777 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:13:17.881940  216777 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:13:17.888845  216777 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:13:18.078164  216777 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:13:18.275220  216777 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:13:18.276184  216777 kubeadm.go:317] 
	I0108 21:13:18.276255  216777 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:13:18.276265  216777 kubeadm.go:317] 
	I0108 21:13:18.276334  216777 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:13:18.276343  216777 kubeadm.go:317] 
	I0108 21:13:18.276364  216777 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:13:18.276413  216777 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:13:18.276456  216777 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:13:18.276462  216777 kubeadm.go:317] 
	I0108 21:13:18.276504  216777 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 21:13:18.276511  216777 kubeadm.go:317] 
	I0108 21:13:18.276549  216777 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:13:18.276554  216777 kubeadm.go:317] 
	I0108 21:13:18.276595  216777 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:13:18.276660  216777 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:13:18.276716  216777 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:13:18.276722  216777 kubeadm.go:317] 
	I0108 21:13:18.276801  216777 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:13:18.276882  216777 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:13:18.276892  216777 kubeadm.go:317] 
	I0108 21:13:18.276967  216777 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token xb30nd.70z1f3ptwlr51nm5 \
	I0108 21:13:18.277068  216777 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:13:18.277089  216777 kubeadm.go:317] 	--control-plane 
	I0108 21:13:18.277093  216777 kubeadm.go:317] 
	I0108 21:13:18.277162  216777 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:13:18.277168  216777 kubeadm.go:317] 
	I0108 21:13:18.277241  216777 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token xb30nd.70z1f3ptwlr51nm5 \
	I0108 21:13:18.277366  216777 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:13:18.280327  216777 kubeadm.go:317] W0108 21:13:08.326794     740 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:13:18.280599  216777 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:13:18.280750  216777 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
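
The --discovery-token-ca-cert-hash value in the join commands above can be recomputed from the cluster CA: kubeadm's hash is the SHA-256 of the DER-encoded public key (SubjectPublicKeyInfo) of ca.crt. A minimal sketch, assuming the conventional kubeadm path /etc/kubernetes/pki/ca.crt:

// Recompute kubeadm's discovery-token-ca-cert-hash from the cluster CA.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// DER-encoded SubjectPublicKeyInfo of the CA public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum)
}
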
	I0108 21:13:18.280777  216777 cni.go:95] Creating CNI manager for "bridge"
	I0108 21:13:18.284103  216777 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 21:13:16.261448  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:18.760312  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:18.285627  216777 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 21:13:18.315986  216777 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
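
The 457-byte conflist copied above is not shown in the log; a representative bridge-plus-portmap conflist written to the same path might look like the following. The subnet and plugin options are illustrative assumptions, not the exact file minikube generates.

// Illustrative only: write a bridge CNI conflist to the path used above.
// The directory is assumed to already exist (created by the mkdir step).
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
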
	I0108 21:13:18.329109  216777 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:13:18.329194  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:18.329206  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=bridge-210619 minikube.k8s.io/updated_at=2023_01_08T21_13_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:18.338635  216777 ops.go:34] apiserver oom_adj: -16
	I0108 21:13:18.468877  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:19.052680  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:19.552419  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:20.052738  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:20.552544  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:21.052658  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:21.552697  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:22.053193  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:22.552689  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:25.010266  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:13:25.117176  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:13:25.117234  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:13:25.140892  181838 cri.go:87] found id: ""
	I0108 21:13:25.140921  181838 logs.go:274] 0 containers: []
	W0108 21:13:25.140930  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:13:25.140938  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:13:25.140987  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:13:25.164294  181838 cri.go:87] found id: ""
	I0108 21:13:25.164323  181838 logs.go:274] 0 containers: []
	W0108 21:13:25.164332  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:13:25.164339  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:13:25.164386  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:13:25.187064  181838 cri.go:87] found id: ""
	I0108 21:13:25.187086  181838 logs.go:274] 0 containers: []
	W0108 21:13:25.187092  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:13:25.187100  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:13:25.187138  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:13:25.210554  181838 cri.go:87] found id: ""
	I0108 21:13:25.210581  181838 logs.go:274] 0 containers: []
	W0108 21:13:25.210591  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:13:25.210599  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:13:25.210649  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:13:25.234681  181838 cri.go:87] found id: ""
	I0108 21:13:25.234709  181838 logs.go:274] 0 containers: []
	W0108 21:13:25.234718  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:13:25.234727  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:13:25.234778  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:13:25.260188  181838 cri.go:87] found id: ""
	I0108 21:13:25.260214  181838 logs.go:274] 0 containers: []
	W0108 21:13:25.260221  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:13:25.260230  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:13:25.260281  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:13:25.283543  181838 cri.go:87] found id: ""
	I0108 21:13:25.283573  181838 logs.go:274] 0 containers: []
	W0108 21:13:25.283581  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:13:25.283589  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:13:25.283634  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:13:25.307407  181838 cri.go:87] found id: ""
	I0108 21:13:25.307431  181838 logs.go:274] 0 containers: []
	W0108 21:13:25.307438  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:13:25.307447  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:13:25.307458  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:13:25.322801  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:13:25.322826  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:13:25.377720  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:13:25.377742  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:13:25.377754  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:13:21.259989  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:23.260086  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:25.260910  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:23.052636  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:23.552390  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:24.052717  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:24.552689  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:25.052702  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:25.553228  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:26.052388  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:26.553204  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:27.052711  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:27.553360  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:25.416865  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:13:25.416896  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:13:25.444675  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:13:25.444701  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:13:25.464847  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:35 kubernetes-upgrade-210902 kubelet[4465]: E0108 21:12:35.450199    4465 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.465214  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:36 kubernetes-upgrade-210902 kubelet[4476]: E0108 21:12:36.227223    4476 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.465565  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:36 kubernetes-upgrade-210902 kubelet[4487]: E0108 21:12:36.968970    4487 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.465942  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:37 kubernetes-upgrade-210902 kubelet[4498]: E0108 21:12:37.697629    4498 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.466320  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:38 kubernetes-upgrade-210902 kubelet[4510]: E0108 21:12:38.466220    4510 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.466684  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:39 kubernetes-upgrade-210902 kubelet[4519]: E0108 21:12:39.201764    4519 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.467074  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:39 kubernetes-upgrade-210902 kubelet[4530]: E0108 21:12:39.946359    4530 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.467457  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:40 kubernetes-upgrade-210902 kubelet[4541]: E0108 21:12:40.707903    4541 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.467887  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:41 kubernetes-upgrade-210902 kubelet[4553]: E0108 21:12:41.445616    4553 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.468268  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:42 kubernetes-upgrade-210902 kubelet[4564]: E0108 21:12:42.202869    4564 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.468707  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:42 kubernetes-upgrade-210902 kubelet[4574]: E0108 21:12:42.946836    4574 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.469087  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:43 kubernetes-upgrade-210902 kubelet[4723]: E0108 21:12:43.701425    4723 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.469467  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:44 kubernetes-upgrade-210902 kubelet[4733]: E0108 21:12:44.451142    4733 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.469844  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:45 kubernetes-upgrade-210902 kubelet[4743]: E0108 21:12:45.201516    4743 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.470223  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:45 kubernetes-upgrade-210902 kubelet[4753]: E0108 21:12:45.951223    4753 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.470608  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:46 kubernetes-upgrade-210902 kubelet[4764]: E0108 21:12:46.702469    4764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.471014  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:47 kubernetes-upgrade-210902 kubelet[4775]: E0108 21:12:47.477715    4775 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.471430  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:48 kubernetes-upgrade-210902 kubelet[4785]: E0108 21:12:48.211798    4785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.471830  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:48 kubernetes-upgrade-210902 kubelet[4794]: E0108 21:12:48.948316    4794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.472237  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:49 kubernetes-upgrade-210902 kubelet[4804]: E0108 21:12:49.699460    4804 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.472657  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:50 kubernetes-upgrade-210902 kubelet[4814]: E0108 21:12:50.446250    4814 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.473041  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4825]: E0108 21:12:51.274290    4825 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.473423  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4835]: E0108 21:12:51.981369    4835 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.473786  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:52 kubernetes-upgrade-210902 kubelet[4846]: E0108 21:12:52.696913    4846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.474140  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:53 kubernetes-upgrade-210902 kubelet[4857]: E0108 21:12:53.463386    4857 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.474492  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:54 kubernetes-upgrade-210902 kubelet[5003]: E0108 21:12:54.195862    5003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.474846  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:54 kubernetes-upgrade-210902 kubelet[5014]: E0108 21:12:54.973538    5014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.475199  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:55 kubernetes-upgrade-210902 kubelet[5025]: E0108 21:12:55.700041    5025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.475621  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:56 kubernetes-upgrade-210902 kubelet[5036]: E0108 21:12:56.460751    5036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.475983  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5046]: E0108 21:12:57.202829    5046 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.476357  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5056]: E0108 21:12:57.949309    5056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.476757  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:58 kubernetes-upgrade-210902 kubelet[5067]: E0108 21:12:58.695134    5067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.477110  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:59 kubernetes-upgrade-210902 kubelet[5078]: E0108 21:12:59.453700    5078 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.477569  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5090]: E0108 21:13:00.195809    5090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.478062  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5101]: E0108 21:13:00.949365    5101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.478444  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:01 kubernetes-upgrade-210902 kubelet[5112]: E0108 21:13:01.699957    5112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.478830  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:02 kubernetes-upgrade-210902 kubelet[5122]: E0108 21:13:02.445014    5122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.479218  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5133]: E0108 21:13:03.211715    5133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.479633  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5144]: E0108 21:13:03.945571    5144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.480056  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:04 kubernetes-upgrade-210902 kubelet[5291]: E0108 21:13:04.696837    5291 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.480439  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:05 kubernetes-upgrade-210902 kubelet[5302]: E0108 21:13:05.455832    5302 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.480815  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:06 kubernetes-upgrade-210902 kubelet[5311]: E0108 21:13:06.202481    5311 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.481193  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:06 kubernetes-upgrade-210902 kubelet[5321]: E0108 21:13:06.953379    5321 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.481569  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:07 kubernetes-upgrade-210902 kubelet[5332]: E0108 21:13:07.695770    5332 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.481953  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:08 kubernetes-upgrade-210902 kubelet[5342]: E0108 21:13:08.448723    5342 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.482336  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5352]: E0108 21:13:09.203634    5352 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.482691  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5363]: E0108 21:13:09.945857    5363 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.483043  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:10 kubernetes-upgrade-210902 kubelet[5373]: E0108 21:13:10.695131    5373 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.483393  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:11 kubernetes-upgrade-210902 kubelet[5384]: E0108 21:13:11.445518    5384 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.483814  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5395]: E0108 21:13:12.198327    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.484169  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5405]: E0108 21:13:12.948375    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.484534  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:13 kubernetes-upgrade-210902 kubelet[5415]: E0108 21:13:13.704963    5415 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.484891  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:14 kubernetes-upgrade-210902 kubelet[5426]: E0108 21:13:14.445641    5426 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.485262  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:15 kubernetes-upgrade-210902 kubelet[5575]: E0108 21:13:15.199671    5575 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.485663  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:15 kubernetes-upgrade-210902 kubelet[5585]: E0108 21:13:15.945043    5585 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.486076  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:16 kubernetes-upgrade-210902 kubelet[5596]: E0108 21:13:16.695191    5596 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.486445  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:17 kubernetes-upgrade-210902 kubelet[5607]: E0108 21:13:17.473576    5607 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.486808  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5617]: E0108 21:13:18.200053    5617 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.487162  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5628]: E0108 21:13:18.947856    5628 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.487546  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:19 kubernetes-upgrade-210902 kubelet[5639]: E0108 21:13:19.696468    5639 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.487903  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:20 kubernetes-upgrade-210902 kubelet[5651]: E0108 21:13:20.445123    5651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.488253  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5662]: E0108 21:13:21.195995    5662 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.488601  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5674]: E0108 21:13:21.946669    5674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.488952  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:22 kubernetes-upgrade-210902 kubelet[5685]: E0108 21:13:22.695066    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.489299  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:23 kubernetes-upgrade-210902 kubelet[5696]: E0108 21:13:23.446714    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.489663  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5708]: E0108 21:13:24.196595    5708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.490073  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5719]: E0108 21:13:24.945926    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:25.490201  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:25.490212  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:13:25.490330  181838 out.go:239] X Problems detected in kubelet:
	W0108 21:13:25.490346  181838 out.go:239]   Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5674]: E0108 21:13:21.946669    5674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.490355  181838 out.go:239]   Jan 08 21:13:22 kubernetes-upgrade-210902 kubelet[5685]: E0108 21:13:22.695066    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.490365  181838 out.go:239]   Jan 08 21:13:23 kubernetes-upgrade-210902 kubelet[5696]: E0108 21:13:23.446714    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.490376  181838 out.go:239]   Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5708]: E0108 21:13:24.196595    5708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:25.490385  181838 out.go:239]   Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5719]: E0108 21:13:24.945926    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:25.490393  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:25.490404  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:13:27.261094  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:29.759810  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:28.052972  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:28.553056  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:29.052523  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:29.552685  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:30.053261  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:30.552682  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:31.052695  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:31.552674  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:32.053348  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:32.552377  216777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:13:32.624002  216777 kubeadm.go:1067] duration metric: took 14.294879349s to wait for elevateKubeSystemPrivileges.
	I0108 21:13:32.624037  216777 kubeadm.go:398] StartCluster complete in 24.377951633s
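
The long run of "kubectl get sa default" invocations above is a readiness poll: the command is retried roughly every half second until the default service account exists, which is what the 14.29s elevateKubeSystemPrivileges metric measures. A minimal sketch of such a poll, with the kubectl and kubeconfig paths taken from the log and the interval and timeout assumed:

// Minimal sketch of the service-account readiness poll seen above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.25.3/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("default service account is ready")
}
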
	I0108 21:13:32.624058  216777 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:13:32.624163  216777 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:13:32.625561  216777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:13:33.151949  216777 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "bridge-210619" rescaled to 1
	I0108 21:13:33.152031  216777 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:13:33.153890  216777 out.go:177] * Verifying Kubernetes components...
	I0108 21:13:33.152191  216777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:13:33.152222  216777 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0108 21:13:33.152364  216777 config.go:180] Loaded profile config "bridge-210619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:13:33.155329  216777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:13:33.155339  216777 addons.go:65] Setting storage-provisioner=true in profile "bridge-210619"
	I0108 21:13:33.155357  216777 addons.go:227] Setting addon storage-provisioner=true in "bridge-210619"
	I0108 21:13:33.155359  216777 addons.go:65] Setting default-storageclass=true in profile "bridge-210619"
	I0108 21:13:33.155387  216777 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-210619"
	W0108 21:13:33.155364  216777 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:13:33.155553  216777 host.go:66] Checking if "bridge-210619" exists ...
	I0108 21:13:33.155821  216777 cli_runner.go:164] Run: docker container inspect bridge-210619 --format={{.State.Status}}
	I0108 21:13:33.155971  216777 cli_runner.go:164] Run: docker container inspect bridge-210619 --format={{.State.Status}}
	I0108 21:13:33.193302  216777 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:13:33.194901  216777 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:13:33.194920  216777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:13:33.194960  216777 addons.go:227] Setting addon default-storageclass=true in "bridge-210619"
	I0108 21:13:33.194971  216777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-210619
	W0108 21:13:33.194973  216777 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:13:33.194995  216777 host.go:66] Checking if "bridge-210619" exists ...
	I0108 21:13:33.195274  216777 cli_runner.go:164] Run: docker container inspect bridge-210619 --format={{.State.Status}}
	I0108 21:13:33.231750  216777 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:13:33.231774  216777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:13:33.231828  216777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-210619
	I0108 21:13:33.236878  216777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33007 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/bridge-210619/id_rsa Username:docker}
	I0108 21:13:33.246196  216777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:13:33.247684  216777 node_ready.go:35] waiting up to 5m0s for node "bridge-210619" to be "Ready" ...
	I0108 21:13:33.251319  216777 node_ready.go:49] node "bridge-210619" has status "Ready":"True"
	I0108 21:13:33.251338  216777 node_ready.go:38] duration metric: took 3.628359ms waiting for node "bridge-210619" to be "Ready" ...
	I0108 21:13:33.251348  216777 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:13:33.262099  216777 pod_ready.go:78] waiting up to 5m0s for pod "coredns-565d847f94-4lb8v" in "kube-system" namespace to be "Ready" ...
	I0108 21:13:33.276921  216777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33007 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/bridge-210619/id_rsa Username:docker}
	I0108 21:13:33.426608  216777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:13:33.510974  216777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:13:34.248841  216777 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.002603398s)
	I0108 21:13:34.248877  216777 start.go:826] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
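
The sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the gateway IP: it inserts a hosts stanza immediately before the "forward . /etc/resolv.conf" line of the Corefile. A sketch of the same transformation in Go, using an abbreviated, illustrative Corefile:

// Sketch of the Corefile edit performed by the sed pipeline above.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, ip string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Insert the hosts block right before the forward directive.
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hosts)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.94.1"))
}
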
	I0108 21:13:34.316883  216777 pod_ready.go:92] pod "coredns-565d847f94-4lb8v" in "kube-system" namespace has status "Ready":"True"
	I0108 21:13:34.316910  216777 pod_ready.go:81] duration metric: took 1.054785755s waiting for pod "coredns-565d847f94-4lb8v" in "kube-system" namespace to be "Ready" ...
	I0108 21:13:34.316919  216777 pod_ready.go:78] waiting up to 5m0s for pod "coredns-565d847f94-p94wj" in "kube-system" namespace to be "Ready" ...
	I0108 21:13:34.321040  216777 pod_ready.go:92] pod "coredns-565d847f94-p94wj" in "kube-system" namespace has status "Ready":"True"
	I0108 21:13:34.321059  216777 pod_ready.go:81] duration metric: took 4.13331ms waiting for pod "coredns-565d847f94-p94wj" in "kube-system" namespace to be "Ready" ...
	I0108 21:13:34.321070  216777 pod_ready.go:78] waiting up to 5m0s for pod "etcd-bridge-210619" in "kube-system" namespace to be "Ready" ...
	I0108 21:13:34.324858  216777 pod_ready.go:92] pod "etcd-bridge-210619" in "kube-system" namespace has status "Ready":"True"
	I0108 21:13:34.324876  216777 pod_ready.go:81] duration metric: took 3.798682ms waiting for pod "etcd-bridge-210619" in "kube-system" namespace to be "Ready" ...
	I0108 21:13:34.324886  216777 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-bridge-210619" in "kube-system" namespace to be "Ready" ...
	I0108 21:13:34.329091  216777 pod_ready.go:92] pod "kube-apiserver-bridge-210619" in "kube-system" namespace has status "Ready":"True"
	I0108 21:13:34.329109  216777 pod_ready.go:81] duration metric: took 4.215377ms waiting for pod "kube-apiserver-bridge-210619" in "kube-system" namespace to be "Ready" ...
	I0108 21:13:34.329121  216777 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-bridge-210619" in "kube-system" namespace to be "Ready" ...
	I0108 21:13:34.361667  216777 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 21:13:34.363367  216777 addons.go:488] enableAddons completed in 1.211155086s
	I0108 21:13:34.451049  216777 pod_ready.go:92] pod "kube-controller-manager-bridge-210619" in "kube-system" namespace has status "Ready":"True"
	I0108 21:13:34.451072  216777 pod_ready.go:81] duration metric: took 121.938895ms waiting for pod "kube-controller-manager-bridge-210619" in "kube-system" namespace to be "Ready" ...
	I0108 21:13:34.451089  216777 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-dkw6p" in "kube-system" namespace to be "Ready" ...
	I0108 21:13:34.850482  216777 pod_ready.go:92] pod "kube-proxy-dkw6p" in "kube-system" namespace has status "Ready":"True"
	I0108 21:13:34.850505  216777 pod_ready.go:81] duration metric: took 399.408128ms waiting for pod "kube-proxy-dkw6p" in "kube-system" namespace to be "Ready" ...
	I0108 21:13:34.850519  216777 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-bridge-210619" in "kube-system" namespace to be "Ready" ...
	I0108 21:13:35.250933  216777 pod_ready.go:92] pod "kube-scheduler-bridge-210619" in "kube-system" namespace has status "Ready":"True"
	I0108 21:13:35.250957  216777 pod_ready.go:81] duration metric: took 400.430688ms waiting for pod "kube-scheduler-bridge-210619" in "kube-system" namespace to be "Ready" ...
	I0108 21:13:35.250969  216777 pod_ready.go:38] duration metric: took 1.999609096s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:13:35.250987  216777 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:13:35.251038  216777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:13:35.261595  216777 api_server.go:71] duration metric: took 2.109534461s to wait for apiserver process to appear ...
	I0108 21:13:35.261621  216777 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:13:35.261634  216777 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:13:35.266991  216777 api_server.go:278] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0108 21:13:35.267828  216777 api_server.go:140] control plane version: v1.25.3
	I0108 21:13:35.267846  216777 api_server.go:130] duration metric: took 6.219248ms to wait for apiserver health ...
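
The healthz probe logged above is a plain HTTPS GET against the apiserver, treating a 200 response with body "ok" as healthy. A minimal sketch; TLS verification is skipped here for brevity, whereas a real check should verify against the cluster CA:

// Minimal sketch of the apiserver healthz probe logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Simplification for the sketch; verify the cluster CA in practice.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.94.2:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%s\n", resp.StatusCode, string(body))
}
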
	I0108 21:13:35.267854  216777 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:13:35.453424  216777 system_pods.go:59] 8 kube-system pods found
	I0108 21:13:35.453451  216777 system_pods.go:61] "coredns-565d847f94-4lb8v" [217615a6-833c-4616-a1bf-ef88ba31ab15] Running
	I0108 21:13:35.453456  216777 system_pods.go:61] "coredns-565d847f94-p94wj" [167851ba-618f-4519-83a1-2b4e68178659] Running
	I0108 21:13:35.453461  216777 system_pods.go:61] "etcd-bridge-210619" [56635811-60eb-40a0-8666-ca389fcc5de6] Running
	I0108 21:13:35.453467  216777 system_pods.go:61] "kube-apiserver-bridge-210619" [aed3c362-2c07-475a-93c1-2743cd862da0] Running
	I0108 21:13:35.453471  216777 system_pods.go:61] "kube-controller-manager-bridge-210619" [17ca8fa8-d9d6-4c17-9d65-838d53e031b8] Running
	I0108 21:13:35.453476  216777 system_pods.go:61] "kube-proxy-dkw6p" [fad52fd3-72ca-492e-919c-e3814bd408d4] Running
	I0108 21:13:35.453480  216777 system_pods.go:61] "kube-scheduler-bridge-210619" [65fc52da-81db-47ed-8fd0-224f85a6fcde] Running
	I0108 21:13:35.453487  216777 system_pods.go:61] "storage-provisioner" [65134efe-1b5f-4198-9c73-0c42474d9e57] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 21:13:35.453496  216777 system_pods.go:74] duration metric: took 185.637447ms to wait for pod list to return data ...
	I0108 21:13:35.453513  216777 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:13:35.650320  216777 default_sa.go:45] found service account: "default"
	I0108 21:13:35.650344  216777 default_sa.go:55] duration metric: took 196.825965ms for default service account to be created ...
	I0108 21:13:35.650356  216777 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 21:13:35.853952  216777 system_pods.go:86] 8 kube-system pods found
	I0108 21:13:35.853981  216777 system_pods.go:89] "coredns-565d847f94-4lb8v" [217615a6-833c-4616-a1bf-ef88ba31ab15] Running
	I0108 21:13:35.853989  216777 system_pods.go:89] "coredns-565d847f94-p94wj" [167851ba-618f-4519-83a1-2b4e68178659] Running
	I0108 21:13:35.854000  216777 system_pods.go:89] "etcd-bridge-210619" [56635811-60eb-40a0-8666-ca389fcc5de6] Running
	I0108 21:13:35.854007  216777 system_pods.go:89] "kube-apiserver-bridge-210619" [aed3c362-2c07-475a-93c1-2743cd862da0] Running
	I0108 21:13:35.854014  216777 system_pods.go:89] "kube-controller-manager-bridge-210619" [17ca8fa8-d9d6-4c17-9d65-838d53e031b8] Running
	I0108 21:13:35.854020  216777 system_pods.go:89] "kube-proxy-dkw6p" [fad52fd3-72ca-492e-919c-e3814bd408d4] Running
	I0108 21:13:35.854026  216777 system_pods.go:89] "kube-scheduler-bridge-210619" [65fc52da-81db-47ed-8fd0-224f85a6fcde] Running
	I0108 21:13:35.854032  216777 system_pods.go:89] "storage-provisioner" [65134efe-1b5f-4198-9c73-0c42474d9e57] Running
	I0108 21:13:35.854042  216777 system_pods.go:126] duration metric: took 203.679232ms to wait for k8s-apps to be running ...
	I0108 21:13:35.854058  216777 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:13:35.854099  216777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:13:35.863385  216777 system_svc.go:56] duration metric: took 9.322454ms WaitForService to wait for kubelet.
	I0108 21:13:35.863410  216777 kubeadm.go:573] duration metric: took 2.711351757s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:13:35.863434  216777 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:13:36.051398  216777 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:13:36.051430  216777 node_conditions.go:123] node cpu capacity is 8
	I0108 21:13:36.051444  216777 node_conditions.go:105] duration metric: took 188.004757ms to run NodePressure ...
	I0108 21:13:36.051457  216777 start.go:217] waiting for startup goroutines ...
	I0108 21:13:36.051832  216777 ssh_runner.go:195] Run: rm -f paused
	I0108 21:13:36.105204  216777 start.go:536] kubectl: 1.26.0, cluster: 1.25.3 (minor skew: 1)
	I0108 21:13:36.107792  216777 out.go:177] * Done! kubectl is now configured to use "bridge-210619" cluster and "default" namespace by default
	I0108 21:13:31.760432  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:34.259680  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:35.491744  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:13:35.616749  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:13:35.616817  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:13:35.641700  181838 cri.go:87] found id: ""
	I0108 21:13:35.641722  181838 logs.go:274] 0 containers: []
	W0108 21:13:35.641730  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:13:35.641736  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:13:35.641791  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:13:35.665354  181838 cri.go:87] found id: ""
	I0108 21:13:35.665382  181838 logs.go:274] 0 containers: []
	W0108 21:13:35.665390  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:13:35.665397  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:13:35.665445  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:13:35.688802  181838 cri.go:87] found id: ""
	I0108 21:13:35.688834  181838 logs.go:274] 0 containers: []
	W0108 21:13:35.688844  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:13:35.688850  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:13:35.688890  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:13:35.712661  181838 cri.go:87] found id: ""
	I0108 21:13:35.712690  181838 logs.go:274] 0 containers: []
	W0108 21:13:35.712699  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:13:35.712708  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:13:35.712768  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:13:35.737882  181838 cri.go:87] found id: ""
	I0108 21:13:35.737904  181838 logs.go:274] 0 containers: []
	W0108 21:13:35.737913  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:13:35.737921  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:13:35.737974  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:13:35.763698  181838 cri.go:87] found id: ""
	I0108 21:13:35.763721  181838 logs.go:274] 0 containers: []
	W0108 21:13:35.763728  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:13:35.763737  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:13:35.763791  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:13:35.786667  181838 cri.go:87] found id: ""
	I0108 21:13:35.786688  181838 logs.go:274] 0 containers: []
	W0108 21:13:35.786694  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:13:35.786700  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:13:35.786747  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:13:35.809460  181838 cri.go:87] found id: ""
	I0108 21:13:35.809480  181838 logs.go:274] 0 containers: []
	W0108 21:13:35.809486  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:13:35.809494  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:13:35.809510  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:13:35.836773  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:13:35.836797  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:13:35.855656  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:45 kubernetes-upgrade-210902 kubelet[4753]: E0108 21:12:45.951223    4753 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.856191  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:46 kubernetes-upgrade-210902 kubelet[4764]: E0108 21:12:46.702469    4764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.856603  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:47 kubernetes-upgrade-210902 kubelet[4775]: E0108 21:12:47.477715    4775 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.857020  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:48 kubernetes-upgrade-210902 kubelet[4785]: E0108 21:12:48.211798    4785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.857393  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:48 kubernetes-upgrade-210902 kubelet[4794]: E0108 21:12:48.948316    4794 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.857749  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:49 kubernetes-upgrade-210902 kubelet[4804]: E0108 21:12:49.699460    4804 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.858115  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:50 kubernetes-upgrade-210902 kubelet[4814]: E0108 21:12:50.446250    4814 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.858482  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4825]: E0108 21:12:51.274290    4825 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.858849  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:51 kubernetes-upgrade-210902 kubelet[4835]: E0108 21:12:51.981369    4835 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.859393  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:52 kubernetes-upgrade-210902 kubelet[4846]: E0108 21:12:52.696913    4846 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.859912  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:53 kubernetes-upgrade-210902 kubelet[4857]: E0108 21:12:53.463386    4857 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.860297  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:54 kubernetes-upgrade-210902 kubelet[5003]: E0108 21:12:54.195862    5003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.860651  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:54 kubernetes-upgrade-210902 kubelet[5014]: E0108 21:12:54.973538    5014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.861024  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:55 kubernetes-upgrade-210902 kubelet[5025]: E0108 21:12:55.700041    5025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.861385  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:56 kubernetes-upgrade-210902 kubelet[5036]: E0108 21:12:56.460751    5036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.861738  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5046]: E0108 21:12:57.202829    5046 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.862110  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5056]: E0108 21:12:57.949309    5056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.862477  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:58 kubernetes-upgrade-210902 kubelet[5067]: E0108 21:12:58.695134    5067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.862870  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:59 kubernetes-upgrade-210902 kubelet[5078]: E0108 21:12:59.453700    5078 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.863230  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5090]: E0108 21:13:00.195809    5090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.863696  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5101]: E0108 21:13:00.949365    5101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.864052  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:01 kubernetes-upgrade-210902 kubelet[5112]: E0108 21:13:01.699957    5112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.864414  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:02 kubernetes-upgrade-210902 kubelet[5122]: E0108 21:13:02.445014    5122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.864784  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5133]: E0108 21:13:03.211715    5133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.865137  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5144]: E0108 21:13:03.945571    5144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.865490  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:04 kubernetes-upgrade-210902 kubelet[5291]: E0108 21:13:04.696837    5291 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.865855  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:05 kubernetes-upgrade-210902 kubelet[5302]: E0108 21:13:05.455832    5302 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.866219  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:06 kubernetes-upgrade-210902 kubelet[5311]: E0108 21:13:06.202481    5311 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.866576  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:06 kubernetes-upgrade-210902 kubelet[5321]: E0108 21:13:06.953379    5321 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.866946  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:07 kubernetes-upgrade-210902 kubelet[5332]: E0108 21:13:07.695770    5332 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.867338  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:08 kubernetes-upgrade-210902 kubelet[5342]: E0108 21:13:08.448723    5342 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.867824  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5352]: E0108 21:13:09.203634    5352 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.868182  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5363]: E0108 21:13:09.945857    5363 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.868537  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:10 kubernetes-upgrade-210902 kubelet[5373]: E0108 21:13:10.695131    5373 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.868894  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:11 kubernetes-upgrade-210902 kubelet[5384]: E0108 21:13:11.445518    5384 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.869262  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5395]: E0108 21:13:12.198327    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.869620  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5405]: E0108 21:13:12.948375    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.869975  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:13 kubernetes-upgrade-210902 kubelet[5415]: E0108 21:13:13.704963    5415 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.870332  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:14 kubernetes-upgrade-210902 kubelet[5426]: E0108 21:13:14.445641    5426 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.870716  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:15 kubernetes-upgrade-210902 kubelet[5575]: E0108 21:13:15.199671    5575 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.871194  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:15 kubernetes-upgrade-210902 kubelet[5585]: E0108 21:13:15.945043    5585 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.871800  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:16 kubernetes-upgrade-210902 kubelet[5596]: E0108 21:13:16.695191    5596 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.872384  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:17 kubernetes-upgrade-210902 kubelet[5607]: E0108 21:13:17.473576    5607 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.872974  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5617]: E0108 21:13:18.200053    5617 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.873467  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5628]: E0108 21:13:18.947856    5628 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.873834  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:19 kubernetes-upgrade-210902 kubelet[5639]: E0108 21:13:19.696468    5639 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.874325  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:20 kubernetes-upgrade-210902 kubelet[5651]: E0108 21:13:20.445123    5651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.874920  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5662]: E0108 21:13:21.195995    5662 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.875386  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5674]: E0108 21:13:21.946669    5674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.875831  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:22 kubernetes-upgrade-210902 kubelet[5685]: E0108 21:13:22.695066    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.876212  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:23 kubernetes-upgrade-210902 kubelet[5696]: E0108 21:13:23.446714    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.876565  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5708]: E0108 21:13:24.196595    5708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.876921  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5719]: E0108 21:13:24.945926    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.877282  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:25 kubernetes-upgrade-210902 kubelet[5868]: E0108 21:13:25.696363    5868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.877680  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:26 kubernetes-upgrade-210902 kubelet[5879]: E0108 21:13:26.446871    5879 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.878035  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:27 kubernetes-upgrade-210902 kubelet[5890]: E0108 21:13:27.196689    5890 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.878386  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:27 kubernetes-upgrade-210902 kubelet[5901]: E0108 21:13:27.945492    5901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.878742  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:28 kubernetes-upgrade-210902 kubelet[5913]: E0108 21:13:28.700535    5913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.879092  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:29 kubernetes-upgrade-210902 kubelet[5924]: E0108 21:13:29.447713    5924 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.879461  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5935]: E0108 21:13:30.195990    5935 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.879863  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5946]: E0108 21:13:30.946203    5946 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.880219  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:31 kubernetes-upgrade-210902 kubelet[5958]: E0108 21:13:31.695521    5958 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.880575  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:32 kubernetes-upgrade-210902 kubelet[5969]: E0108 21:13:32.454066    5969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.880932  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5979]: E0108 21:13:33.216578    5979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.881283  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5989]: E0108 21:13:33.945429    5989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.881640  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:34 kubernetes-upgrade-210902 kubelet[6000]: E0108 21:13:34.696181    6000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.881996  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:35 kubernetes-upgrade-210902 kubelet[6010]: E0108 21:13:35.445631    6010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:35.882114  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:13:35.882129  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:13:35.899296  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:13:35.899333  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:13:35.954201  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:13:35.954227  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:13:35.954242  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:13:35.988771  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:35.988797  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:13:35.988906  181838 out.go:239] X Problems detected in kubelet:
	W0108 21:13:35.988917  181838 out.go:239]   Jan 08 21:13:32 kubernetes-upgrade-210902 kubelet[5969]: E0108 21:13:32.454066    5969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.988922  181838 out.go:239]   Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5979]: E0108 21:13:33.216578    5979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.988926  181838 out.go:239]   Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5989]: E0108 21:13:33.945429    5989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.988933  181838 out.go:239]   Jan 08 21:13:34 kubernetes-upgrade-210902 kubelet[6000]: E0108 21:13:34.696181    6000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:35.988940  181838 out.go:239]   Jan 08 21:13:35 kubernetes-upgrade-210902 kubelet[6010]: E0108 21:13:35.445631    6010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:35.988948  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:35.988954  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:13:36.260714  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:38.760852  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:41.260560  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:43.760444  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:45.760830  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:45.990303  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:13:46.117098  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:13:46.117168  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:13:46.141382  181838 cri.go:87] found id: ""
	I0108 21:13:46.141416  181838 logs.go:274] 0 containers: []
	W0108 21:13:46.141425  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:13:46.141432  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:13:46.141499  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:13:46.164973  181838 cri.go:87] found id: ""
	I0108 21:13:46.164998  181838 logs.go:274] 0 containers: []
	W0108 21:13:46.165007  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:13:46.165015  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:13:46.165066  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:13:46.189558  181838 cri.go:87] found id: ""
	I0108 21:13:46.189585  181838 logs.go:274] 0 containers: []
	W0108 21:13:46.189594  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:13:46.189601  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:13:46.189651  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:13:46.213759  181838 cri.go:87] found id: ""
	I0108 21:13:46.213786  181838 logs.go:274] 0 containers: []
	W0108 21:13:46.213794  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:13:46.213802  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:13:46.213856  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:13:46.236871  181838 cri.go:87] found id: ""
	I0108 21:13:46.236898  181838 logs.go:274] 0 containers: []
	W0108 21:13:46.236908  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:13:46.236915  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:13:46.236961  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:13:46.262650  181838 cri.go:87] found id: ""
	I0108 21:13:46.262675  181838 logs.go:274] 0 containers: []
	W0108 21:13:46.262683  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:13:46.262691  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:13:46.262732  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:13:46.285635  181838 cri.go:87] found id: ""
	I0108 21:13:46.285667  181838 logs.go:274] 0 containers: []
	W0108 21:13:46.285674  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:13:46.285680  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:13:46.285720  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:13:46.308380  181838 cri.go:87] found id: ""
	I0108 21:13:46.308403  181838 logs.go:274] 0 containers: []
	W0108 21:13:46.308411  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:13:46.308422  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:13:46.308435  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:13:46.333656  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:13:46.333685  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:13:46.348670  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:56 kubernetes-upgrade-210902 kubelet[5036]: E0108 21:12:56.460751    5036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.349037  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5046]: E0108 21:12:57.202829    5046 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.349396  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:57 kubernetes-upgrade-210902 kubelet[5056]: E0108 21:12:57.949309    5056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.349752  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:58 kubernetes-upgrade-210902 kubelet[5067]: E0108 21:12:58.695134    5067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.350108  181838 logs.go:138] Found kubelet problem: Jan 08 21:12:59 kubernetes-upgrade-210902 kubelet[5078]: E0108 21:12:59.453700    5078 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.350499  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5090]: E0108 21:13:00.195809    5090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.351013  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:00 kubernetes-upgrade-210902 kubelet[5101]: E0108 21:13:00.949365    5101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.351608  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:01 kubernetes-upgrade-210902 kubelet[5112]: E0108 21:13:01.699957    5112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.351968  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:02 kubernetes-upgrade-210902 kubelet[5122]: E0108 21:13:02.445014    5122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.352362  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5133]: E0108 21:13:03.211715    5133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.352721  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:03 kubernetes-upgrade-210902 kubelet[5144]: E0108 21:13:03.945571    5144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.353079  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:04 kubernetes-upgrade-210902 kubelet[5291]: E0108 21:13:04.696837    5291 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.353433  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:05 kubernetes-upgrade-210902 kubelet[5302]: E0108 21:13:05.455832    5302 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.353786  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:06 kubernetes-upgrade-210902 kubelet[5311]: E0108 21:13:06.202481    5311 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.354138  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:06 kubernetes-upgrade-210902 kubelet[5321]: E0108 21:13:06.953379    5321 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.354487  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:07 kubernetes-upgrade-210902 kubelet[5332]: E0108 21:13:07.695770    5332 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.354858  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:08 kubernetes-upgrade-210902 kubelet[5342]: E0108 21:13:08.448723    5342 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.355213  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5352]: E0108 21:13:09.203634    5352 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.355604  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5363]: E0108 21:13:09.945857    5363 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.356003  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:10 kubernetes-upgrade-210902 kubelet[5373]: E0108 21:13:10.695131    5373 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.356356  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:11 kubernetes-upgrade-210902 kubelet[5384]: E0108 21:13:11.445518    5384 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.356713  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5395]: E0108 21:13:12.198327    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.357066  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5405]: E0108 21:13:12.948375    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.357418  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:13 kubernetes-upgrade-210902 kubelet[5415]: E0108 21:13:13.704963    5415 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.357774  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:14 kubernetes-upgrade-210902 kubelet[5426]: E0108 21:13:14.445641    5426 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.358124  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:15 kubernetes-upgrade-210902 kubelet[5575]: E0108 21:13:15.199671    5575 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.358482  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:15 kubernetes-upgrade-210902 kubelet[5585]: E0108 21:13:15.945043    5585 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.358848  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:16 kubernetes-upgrade-210902 kubelet[5596]: E0108 21:13:16.695191    5596 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.359212  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:17 kubernetes-upgrade-210902 kubelet[5607]: E0108 21:13:17.473576    5607 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.359598  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5617]: E0108 21:13:18.200053    5617 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.359951  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5628]: E0108 21:13:18.947856    5628 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.360302  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:19 kubernetes-upgrade-210902 kubelet[5639]: E0108 21:13:19.696468    5639 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.360658  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:20 kubernetes-upgrade-210902 kubelet[5651]: E0108 21:13:20.445123    5651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.361024  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5662]: E0108 21:13:21.195995    5662 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.361374  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5674]: E0108 21:13:21.946669    5674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.361730  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:22 kubernetes-upgrade-210902 kubelet[5685]: E0108 21:13:22.695066    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.362144  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:23 kubernetes-upgrade-210902 kubelet[5696]: E0108 21:13:23.446714    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.362498  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5708]: E0108 21:13:24.196595    5708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.362856  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5719]: E0108 21:13:24.945926    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.363228  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:25 kubernetes-upgrade-210902 kubelet[5868]: E0108 21:13:25.696363    5868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.363607  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:26 kubernetes-upgrade-210902 kubelet[5879]: E0108 21:13:26.446871    5879 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.363957  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:27 kubernetes-upgrade-210902 kubelet[5890]: E0108 21:13:27.196689    5890 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.364307  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:27 kubernetes-upgrade-210902 kubelet[5901]: E0108 21:13:27.945492    5901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.364660  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:28 kubernetes-upgrade-210902 kubelet[5913]: E0108 21:13:28.700535    5913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.365008  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:29 kubernetes-upgrade-210902 kubelet[5924]: E0108 21:13:29.447713    5924 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.365357  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5935]: E0108 21:13:30.195990    5935 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.365711  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5946]: E0108 21:13:30.946203    5946 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.366057  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:31 kubernetes-upgrade-210902 kubelet[5958]: E0108 21:13:31.695521    5958 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.366406  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:32 kubernetes-upgrade-210902 kubelet[5969]: E0108 21:13:32.454066    5969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.366766  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5979]: E0108 21:13:33.216578    5979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.367117  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5989]: E0108 21:13:33.945429    5989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.367478  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:34 kubernetes-upgrade-210902 kubelet[6000]: E0108 21:13:34.696181    6000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.367857  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:35 kubernetes-upgrade-210902 kubelet[6010]: E0108 21:13:35.445631    6010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.368225  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:36 kubernetes-upgrade-210902 kubelet[6161]: E0108 21:13:36.214784    6161 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.368578  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:36 kubernetes-upgrade-210902 kubelet[6171]: E0108 21:13:36.947039    6171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.368928  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:37 kubernetes-upgrade-210902 kubelet[6182]: E0108 21:13:37.695548    6182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.369313  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:38 kubernetes-upgrade-210902 kubelet[6193]: E0108 21:13:38.469776    6193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.369673  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:39 kubernetes-upgrade-210902 kubelet[6203]: E0108 21:13:39.204546    6203 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.370042  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:39 kubernetes-upgrade-210902 kubelet[6213]: E0108 21:13:39.944879    6213 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.370394  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:40 kubernetes-upgrade-210902 kubelet[6224]: E0108 21:13:40.695541    6224 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.370753  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:41 kubernetes-upgrade-210902 kubelet[6235]: E0108 21:13:41.446136    6235 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.371209  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6246]: E0108 21:13:42.195645    6246 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.371630  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6257]: E0108 21:13:42.945994    6257 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.371984  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:43 kubernetes-upgrade-210902 kubelet[6268]: E0108 21:13:43.695611    6268 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.372369  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:44 kubernetes-upgrade-210902 kubelet[6279]: E0108 21:13:44.446951    6279 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.372732  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6290]: E0108 21:13:45.195689    6290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.373095  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6301]: E0108 21:13:45.945075    6301 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:46.373212  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:13:46.373226  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:13:46.390527  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:13:46.390557  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:13:46.447603  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:13:46.447625  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:13:46.447639  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:13:46.485360  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:46.485386  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:13:46.485503  181838 out.go:239] X Problems detected in kubelet:
	W0108 21:13:46.485520  181838 out.go:239]   Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6257]: E0108 21:13:42.945994    6257 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.485529  181838 out.go:239]   Jan 08 21:13:43 kubernetes-upgrade-210902 kubelet[6268]: E0108 21:13:43.695611    6268 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.485541  181838 out.go:239]   Jan 08 21:13:44 kubernetes-upgrade-210902 kubelet[6279]: E0108 21:13:44.446951    6279 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.485547  181838 out.go:239]   Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6290]: E0108 21:13:45.195689    6290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:46.485555  181838 out.go:239]   Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6301]: E0108 21:13:45.945075    6301 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:46.485559  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:46.485566  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:13:48.260044  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:50.260311  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:52.759872  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:54.760547  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:56.487053  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:13:56.617062  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:13:56.617126  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:13:56.644257  181838 cri.go:87] found id: ""
	I0108 21:13:56.644283  181838 logs.go:274] 0 containers: []
	W0108 21:13:56.644291  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:13:56.644297  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:13:56.644348  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:13:56.669044  181838 cri.go:87] found id: ""
	I0108 21:13:56.669064  181838 logs.go:274] 0 containers: []
	W0108 21:13:56.669070  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:13:56.669076  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:13:56.669120  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:13:56.692166  181838 cri.go:87] found id: ""
	I0108 21:13:56.692185  181838 logs.go:274] 0 containers: []
	W0108 21:13:56.692191  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:13:56.692197  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:13:56.692236  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:13:56.714838  181838 cri.go:87] found id: ""
	I0108 21:13:56.714859  181838 logs.go:274] 0 containers: []
	W0108 21:13:56.714865  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:13:56.714870  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:13:56.714919  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:13:56.740419  181838 cri.go:87] found id: ""
	I0108 21:13:56.740441  181838 logs.go:274] 0 containers: []
	W0108 21:13:56.740450  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:13:56.740459  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:13:56.740538  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:13:56.767103  181838 cri.go:87] found id: ""
	I0108 21:13:56.767128  181838 logs.go:274] 0 containers: []
	W0108 21:13:56.767135  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:13:56.767141  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:13:56.767180  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:13:56.793172  181838 cri.go:87] found id: ""
	I0108 21:13:56.793196  181838 logs.go:274] 0 containers: []
	W0108 21:13:56.793204  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:13:56.793212  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:13:56.793250  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:13:56.816140  181838 cri.go:87] found id: ""
	I0108 21:13:56.816166  181838 logs.go:274] 0 containers: []
	W0108 21:13:56.816173  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:13:56.816182  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:13:56.816194  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:13:56.833793  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:06 kubernetes-upgrade-210902 kubelet[5321]: E0108 21:13:06.953379    5321 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.834382  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:07 kubernetes-upgrade-210902 kubelet[5332]: E0108 21:13:07.695770    5332 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.834970  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:08 kubernetes-upgrade-210902 kubelet[5342]: E0108 21:13:08.448723    5342 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.835401  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5352]: E0108 21:13:09.203634    5352 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.835789  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:09 kubernetes-upgrade-210902 kubelet[5363]: E0108 21:13:09.945857    5363 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.836149  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:10 kubernetes-upgrade-210902 kubelet[5373]: E0108 21:13:10.695131    5373 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.836514  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:11 kubernetes-upgrade-210902 kubelet[5384]: E0108 21:13:11.445518    5384 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.836884  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5395]: E0108 21:13:12.198327    5395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.837255  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:12 kubernetes-upgrade-210902 kubelet[5405]: E0108 21:13:12.948375    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.837613  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:13 kubernetes-upgrade-210902 kubelet[5415]: E0108 21:13:13.704963    5415 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.838019  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:14 kubernetes-upgrade-210902 kubelet[5426]: E0108 21:13:14.445641    5426 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.838374  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:15 kubernetes-upgrade-210902 kubelet[5575]: E0108 21:13:15.199671    5575 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.838726  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:15 kubernetes-upgrade-210902 kubelet[5585]: E0108 21:13:15.945043    5585 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.839096  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:16 kubernetes-upgrade-210902 kubelet[5596]: E0108 21:13:16.695191    5596 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.839463  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:17 kubernetes-upgrade-210902 kubelet[5607]: E0108 21:13:17.473576    5607 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.839840  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5617]: E0108 21:13:18.200053    5617 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.840195  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5628]: E0108 21:13:18.947856    5628 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.840570  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:19 kubernetes-upgrade-210902 kubelet[5639]: E0108 21:13:19.696468    5639 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.840924  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:20 kubernetes-upgrade-210902 kubelet[5651]: E0108 21:13:20.445123    5651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.841274  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5662]: E0108 21:13:21.195995    5662 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.841627  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5674]: E0108 21:13:21.946669    5674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.841983  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:22 kubernetes-upgrade-210902 kubelet[5685]: E0108 21:13:22.695066    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.842353  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:23 kubernetes-upgrade-210902 kubelet[5696]: E0108 21:13:23.446714    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.842710  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5708]: E0108 21:13:24.196595    5708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.843061  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5719]: E0108 21:13:24.945926    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.843409  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:25 kubernetes-upgrade-210902 kubelet[5868]: E0108 21:13:25.696363    5868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.843812  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:26 kubernetes-upgrade-210902 kubelet[5879]: E0108 21:13:26.446871    5879 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.844177  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:27 kubernetes-upgrade-210902 kubelet[5890]: E0108 21:13:27.196689    5890 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.844533  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:27 kubernetes-upgrade-210902 kubelet[5901]: E0108 21:13:27.945492    5901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.844895  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:28 kubernetes-upgrade-210902 kubelet[5913]: E0108 21:13:28.700535    5913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.845252  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:29 kubernetes-upgrade-210902 kubelet[5924]: E0108 21:13:29.447713    5924 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.845608  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5935]: E0108 21:13:30.195990    5935 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.845984  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5946]: E0108 21:13:30.946203    5946 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.846340  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:31 kubernetes-upgrade-210902 kubelet[5958]: E0108 21:13:31.695521    5958 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.846695  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:32 kubernetes-upgrade-210902 kubelet[5969]: E0108 21:13:32.454066    5969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.847048  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5979]: E0108 21:13:33.216578    5979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.847402  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5989]: E0108 21:13:33.945429    5989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.847781  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:34 kubernetes-upgrade-210902 kubelet[6000]: E0108 21:13:34.696181    6000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.848153  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:35 kubernetes-upgrade-210902 kubelet[6010]: E0108 21:13:35.445631    6010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.848536  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:36 kubernetes-upgrade-210902 kubelet[6161]: E0108 21:13:36.214784    6161 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.848899  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:36 kubernetes-upgrade-210902 kubelet[6171]: E0108 21:13:36.947039    6171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.849272  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:37 kubernetes-upgrade-210902 kubelet[6182]: E0108 21:13:37.695548    6182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.849652  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:38 kubernetes-upgrade-210902 kubelet[6193]: E0108 21:13:38.469776    6193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.850013  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:39 kubernetes-upgrade-210902 kubelet[6203]: E0108 21:13:39.204546    6203 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.850377  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:39 kubernetes-upgrade-210902 kubelet[6213]: E0108 21:13:39.944879    6213 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.850742  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:40 kubernetes-upgrade-210902 kubelet[6224]: E0108 21:13:40.695541    6224 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.851265  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:41 kubernetes-upgrade-210902 kubelet[6235]: E0108 21:13:41.446136    6235 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.851829  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6246]: E0108 21:13:42.195645    6246 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.852226  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6257]: E0108 21:13:42.945994    6257 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.852601  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:43 kubernetes-upgrade-210902 kubelet[6268]: E0108 21:13:43.695611    6268 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.852968  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:44 kubernetes-upgrade-210902 kubelet[6279]: E0108 21:13:44.446951    6279 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.853338  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6290]: E0108 21:13:45.195689    6290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.853716  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6301]: E0108 21:13:45.945075    6301 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.854079  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:46 kubernetes-upgrade-210902 kubelet[6451]: E0108 21:13:46.694345    6451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.854448  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:47 kubernetes-upgrade-210902 kubelet[6461]: E0108 21:13:47.485269    6461 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.854819  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:48 kubernetes-upgrade-210902 kubelet[6471]: E0108 21:13:48.194061    6471 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.855191  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:48 kubernetes-upgrade-210902 kubelet[6482]: E0108 21:13:48.947504    6482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.855603  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:49 kubernetes-upgrade-210902 kubelet[6493]: E0108 21:13:49.694384    6493 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.856000  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:50 kubernetes-upgrade-210902 kubelet[6504]: E0108 21:13:50.445872    6504 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.856372  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:51 kubernetes-upgrade-210902 kubelet[6515]: E0108 21:13:51.198152    6515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.856744  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:51 kubernetes-upgrade-210902 kubelet[6525]: E0108 21:13:51.945447    6525 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.857112  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:52 kubernetes-upgrade-210902 kubelet[6536]: E0108 21:13:52.693871    6536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.857474  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:53 kubernetes-upgrade-210902 kubelet[6548]: E0108 21:13:53.446740    6548 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.857847  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:54 kubernetes-upgrade-210902 kubelet[6559]: E0108 21:13:54.196067    6559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.858214  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:54 kubernetes-upgrade-210902 kubelet[6569]: E0108 21:13:54.946137    6569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.858594  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:55 kubernetes-upgrade-210902 kubelet[6580]: E0108 21:13:55.695271    6580 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.858957  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:56 kubernetes-upgrade-210902 kubelet[6591]: E0108 21:13:56.445560    6591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:56.859090  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:13:56.859109  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:13:56.879005  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:13:56.879044  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:13:56.934680  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:13:56.934709  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:13:56.934722  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:13:56.968969  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:13:56.968997  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:13:56.994718  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:56.994740  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:13:56.994837  181838 out.go:239] X Problems detected in kubelet:
	W0108 21:13:56.994849  181838 out.go:239]   Jan 08 21:13:53 kubernetes-upgrade-210902 kubelet[6548]: E0108 21:13:53.446740    6548 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.994854  181838 out.go:239]   Jan 08 21:13:54 kubernetes-upgrade-210902 kubelet[6559]: E0108 21:13:54.196067    6559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.994859  181838 out.go:239]   Jan 08 21:13:54 kubernetes-upgrade-210902 kubelet[6569]: E0108 21:13:54.946137    6569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.994865  181838 out.go:239]   Jan 08 21:13:55 kubernetes-upgrade-210902 kubelet[6580]: E0108 21:13:55.695271    6580 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:13:56.994871  181838 out.go:239]   Jan 08 21:13:56 kubernetes-upgrade-210902 kubelet[6591]: E0108 21:13:56.445560    6591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:13:56.994875  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:56.994880  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
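	For reference, the repeated kubelet failure above ("failed to parse kubelet flag: unknown flag: --cni-conf-dir") keeps the kubelet crash-looping, which is why no control-plane containers are ever found and why localhost:8443 refuses connections during "describe nodes". A minimal, hedged sketch of how the same diagnostics could be reproduced by hand against this profile — the profile name is taken from the log, the commands mirror the ssh_runner lines above, and the grep paths are only the usual kubeadm locations, not confirmed by this report:
	
	  # Tail the kubelet journal, as logs.go does above
	  out/minikube-linux-amd64 ssh -p kubernetes-upgrade-210902 -- sudo journalctl -u kubelet -n 400
	  # Confirm that no control-plane containers were created
	  out/minikube-linux-amd64 ssh -p kubernetes-upgrade-210902 -- sudo crictl ps -a
	  # Look for where the stale flag is still configured (assumed locations)
	  out/minikube-linux-amd64 ssh -p kubernetes-upgrade-210902 -- sudo grep -Rn -e cni-conf-dir /var/lib/kubelet /etc/systemd/system/kubelet.service.d
	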
	I0108 21:13:56.760942  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:59.260238  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:01.759947  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:03.760067  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:05.760630  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:06.996209  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:14:07.117290  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:14:07.117354  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:14:07.141283  181838 cri.go:87] found id: ""
	I0108 21:14:07.141303  181838 logs.go:274] 0 containers: []
	W0108 21:14:07.141309  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:14:07.141315  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:14:07.141352  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:14:07.164314  181838 cri.go:87] found id: ""
	I0108 21:14:07.164341  181838 logs.go:274] 0 containers: []
	W0108 21:14:07.164351  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:14:07.164358  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:14:07.164399  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:14:07.187028  181838 cri.go:87] found id: ""
	I0108 21:14:07.187057  181838 logs.go:274] 0 containers: []
	W0108 21:14:07.187063  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:14:07.187069  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:14:07.187109  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:14:07.211446  181838 cri.go:87] found id: ""
	I0108 21:14:07.211467  181838 logs.go:274] 0 containers: []
	W0108 21:14:07.211491  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:14:07.211499  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:14:07.211552  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:14:07.236272  181838 cri.go:87] found id: ""
	I0108 21:14:07.236297  181838 logs.go:274] 0 containers: []
	W0108 21:14:07.236305  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:14:07.236312  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:14:07.236367  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:14:07.261316  181838 cri.go:87] found id: ""
	I0108 21:14:07.261339  181838 logs.go:274] 0 containers: []
	W0108 21:14:07.261346  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:14:07.261354  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:14:07.261410  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:14:07.283963  181838 cri.go:87] found id: ""
	I0108 21:14:07.283982  181838 logs.go:274] 0 containers: []
	W0108 21:14:07.283989  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:14:07.283995  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:14:07.284036  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:14:07.307456  181838 cri.go:87] found id: ""
	I0108 21:14:07.307509  181838 logs.go:274] 0 containers: []
	W0108 21:14:07.307519  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:14:07.307532  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:14:07.307547  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:14:07.324202  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:17 kubernetes-upgrade-210902 kubelet[5607]: E0108 21:13:17.473576    5607 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.324598  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5617]: E0108 21:13:18.200053    5617 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.324986  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:18 kubernetes-upgrade-210902 kubelet[5628]: E0108 21:13:18.947856    5628 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.325356  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:19 kubernetes-upgrade-210902 kubelet[5639]: E0108 21:13:19.696468    5639 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.325773  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:20 kubernetes-upgrade-210902 kubelet[5651]: E0108 21:13:20.445123    5651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.326140  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5662]: E0108 21:13:21.195995    5662 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.326498  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:21 kubernetes-upgrade-210902 kubelet[5674]: E0108 21:13:21.946669    5674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.326850  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:22 kubernetes-upgrade-210902 kubelet[5685]: E0108 21:13:22.695066    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.327201  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:23 kubernetes-upgrade-210902 kubelet[5696]: E0108 21:13:23.446714    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.327588  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5708]: E0108 21:13:24.196595    5708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.327941  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:24 kubernetes-upgrade-210902 kubelet[5719]: E0108 21:13:24.945926    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.328294  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:25 kubernetes-upgrade-210902 kubelet[5868]: E0108 21:13:25.696363    5868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.328648  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:26 kubernetes-upgrade-210902 kubelet[5879]: E0108 21:13:26.446871    5879 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.329009  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:27 kubernetes-upgrade-210902 kubelet[5890]: E0108 21:13:27.196689    5890 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.329357  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:27 kubernetes-upgrade-210902 kubelet[5901]: E0108 21:13:27.945492    5901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.329802  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:28 kubernetes-upgrade-210902 kubelet[5913]: E0108 21:13:28.700535    5913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.330179  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:29 kubernetes-upgrade-210902 kubelet[5924]: E0108 21:13:29.447713    5924 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.330537  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5935]: E0108 21:13:30.195990    5935 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.330890  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5946]: E0108 21:13:30.946203    5946 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.331239  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:31 kubernetes-upgrade-210902 kubelet[5958]: E0108 21:13:31.695521    5958 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.331659  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:32 kubernetes-upgrade-210902 kubelet[5969]: E0108 21:13:32.454066    5969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.332014  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5979]: E0108 21:13:33.216578    5979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.332385  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5989]: E0108 21:13:33.945429    5989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.332763  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:34 kubernetes-upgrade-210902 kubelet[6000]: E0108 21:13:34.696181    6000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.333124  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:35 kubernetes-upgrade-210902 kubelet[6010]: E0108 21:13:35.445631    6010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.333598  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:36 kubernetes-upgrade-210902 kubelet[6161]: E0108 21:13:36.214784    6161 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.333960  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:36 kubernetes-upgrade-210902 kubelet[6171]: E0108 21:13:36.947039    6171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.334311  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:37 kubernetes-upgrade-210902 kubelet[6182]: E0108 21:13:37.695548    6182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.334707  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:38 kubernetes-upgrade-210902 kubelet[6193]: E0108 21:13:38.469776    6193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.335059  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:39 kubernetes-upgrade-210902 kubelet[6203]: E0108 21:13:39.204546    6203 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.335413  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:39 kubernetes-upgrade-210902 kubelet[6213]: E0108 21:13:39.944879    6213 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.335802  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:40 kubernetes-upgrade-210902 kubelet[6224]: E0108 21:13:40.695541    6224 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.336160  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:41 kubernetes-upgrade-210902 kubelet[6235]: E0108 21:13:41.446136    6235 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.336536  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6246]: E0108 21:13:42.195645    6246 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.336928  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6257]: E0108 21:13:42.945994    6257 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.337282  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:43 kubernetes-upgrade-210902 kubelet[6268]: E0108 21:13:43.695611    6268 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.337643  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:44 kubernetes-upgrade-210902 kubelet[6279]: E0108 21:13:44.446951    6279 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.337994  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6290]: E0108 21:13:45.195689    6290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.338342  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6301]: E0108 21:13:45.945075    6301 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.338694  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:46 kubernetes-upgrade-210902 kubelet[6451]: E0108 21:13:46.694345    6451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.339042  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:47 kubernetes-upgrade-210902 kubelet[6461]: E0108 21:13:47.485269    6461 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.339405  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:48 kubernetes-upgrade-210902 kubelet[6471]: E0108 21:13:48.194061    6471 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.339803  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:48 kubernetes-upgrade-210902 kubelet[6482]: E0108 21:13:48.947504    6482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.340155  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:49 kubernetes-upgrade-210902 kubelet[6493]: E0108 21:13:49.694384    6493 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.340520  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:50 kubernetes-upgrade-210902 kubelet[6504]: E0108 21:13:50.445872    6504 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.340879  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:51 kubernetes-upgrade-210902 kubelet[6515]: E0108 21:13:51.198152    6515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.341236  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:51 kubernetes-upgrade-210902 kubelet[6525]: E0108 21:13:51.945447    6525 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.341589  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:52 kubernetes-upgrade-210902 kubelet[6536]: E0108 21:13:52.693871    6536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.341958  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:53 kubernetes-upgrade-210902 kubelet[6548]: E0108 21:13:53.446740    6548 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.342326  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:54 kubernetes-upgrade-210902 kubelet[6559]: E0108 21:13:54.196067    6559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.342680  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:54 kubernetes-upgrade-210902 kubelet[6569]: E0108 21:13:54.946137    6569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.343053  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:55 kubernetes-upgrade-210902 kubelet[6580]: E0108 21:13:55.695271    6580 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.343423  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:56 kubernetes-upgrade-210902 kubelet[6591]: E0108 21:13:56.445560    6591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.343903  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:57 kubernetes-upgrade-210902 kubelet[6738]: E0108 21:13:57.193172    6738 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.344275  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:57 kubernetes-upgrade-210902 kubelet[6749]: E0108 21:13:57.946361    6749 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.344668  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:58 kubernetes-upgrade-210902 kubelet[6761]: E0108 21:13:58.698549    6761 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.345025  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:59 kubernetes-upgrade-210902 kubelet[6773]: E0108 21:13:59.444839    6773 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.345375  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:00 kubernetes-upgrade-210902 kubelet[6784]: E0108 21:14:00.195304    6784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.345820  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:00 kubernetes-upgrade-210902 kubelet[6795]: E0108 21:14:00.946849    6795 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.346222  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:01 kubernetes-upgrade-210902 kubelet[6806]: E0108 21:14:01.697208    6806 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.346735  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:02 kubernetes-upgrade-210902 kubelet[6818]: E0108 21:14:02.447776    6818 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.347119  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:03 kubernetes-upgrade-210902 kubelet[6830]: E0108 21:14:03.200414    6830 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.347514  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:03 kubernetes-upgrade-210902 kubelet[6842]: E0108 21:14:03.946176    6842 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.347999  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:04 kubernetes-upgrade-210902 kubelet[6853]: E0108 21:14:04.696838    6853 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.348362  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:05 kubernetes-upgrade-210902 kubelet[6865]: E0108 21:14:05.447467    6865 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.348722  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:06 kubernetes-upgrade-210902 kubelet[6877]: E0108 21:14:06.196870    6877 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.349084  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:06 kubernetes-upgrade-210902 kubelet[6888]: E0108 21:14:06.947179    6888 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:14:07.349250  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:14:07.349267  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:14:07.368098  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:14:07.368124  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:14:07.434187  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:14:07.434211  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:14:07.434221  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:14:07.481079  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:14:07.481116  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:14:07.511178  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:14:07.511204  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:14:07.511309  181838 out.go:239] X Problems detected in kubelet:
	W0108 21:14:07.511326  181838 out.go:239]   Jan 08 21:14:03 kubernetes-upgrade-210902 kubelet[6842]: E0108 21:14:03.946176    6842 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.511334  181838 out.go:239]   Jan 08 21:14:04 kubernetes-upgrade-210902 kubelet[6853]: E0108 21:14:04.696838    6853 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.511341  181838 out.go:239]   Jan 08 21:14:05 kubernetes-upgrade-210902 kubelet[6865]: E0108 21:14:05.447467    6865 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.511350  181838 out.go:239]   Jan 08 21:14:06 kubernetes-upgrade-210902 kubelet[6877]: E0108 21:14:06.196870    6877 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:07.511357  181838 out.go:239]   Jan 08 21:14:06 kubernetes-upgrade-210902 kubelet[6888]: E0108 21:14:06.947179    6888 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:14:07.511365  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:14:07.511372  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:14:08.261268  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:10.760604  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:13.260530  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:15.760003  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:17.512442  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:14:17.617425  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:14:17.617502  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:14:17.642169  181838 cri.go:87] found id: ""
	I0108 21:14:17.642197  181838 logs.go:274] 0 containers: []
	W0108 21:14:17.642205  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:14:17.642212  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:14:17.642252  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:14:17.666495  181838 cri.go:87] found id: ""
	I0108 21:14:17.666516  181838 logs.go:274] 0 containers: []
	W0108 21:14:17.666522  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:14:17.666528  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:14:17.666567  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:14:17.690914  181838 cri.go:87] found id: ""
	I0108 21:14:17.690933  181838 logs.go:274] 0 containers: []
	W0108 21:14:17.690939  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:14:17.690945  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:14:17.690986  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:14:17.714578  181838 cri.go:87] found id: ""
	I0108 21:14:17.714598  181838 logs.go:274] 0 containers: []
	W0108 21:14:17.714604  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:14:17.714613  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:14:17.714659  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:14:17.737953  181838 cri.go:87] found id: ""
	I0108 21:14:17.737973  181838 logs.go:274] 0 containers: []
	W0108 21:14:17.737980  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:14:17.737988  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:14:17.738032  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:14:17.762248  181838 cri.go:87] found id: ""
	I0108 21:14:17.762269  181838 logs.go:274] 0 containers: []
	W0108 21:14:17.762276  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:14:17.762284  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:14:17.762340  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:14:17.785902  181838 cri.go:87] found id: ""
	I0108 21:14:17.785925  181838 logs.go:274] 0 containers: []
	W0108 21:14:17.785932  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:14:17.785939  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:14:17.785986  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:14:17.809097  181838 cri.go:87] found id: ""
	I0108 21:14:17.809127  181838 logs.go:274] 0 containers: []
	W0108 21:14:17.809136  181838 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 21:14:17.809207  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:14:17.809246  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:14:17.825478  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:27 kubernetes-upgrade-210902 kubelet[5901]: E0108 21:13:27.945492    5901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.825850  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:28 kubernetes-upgrade-210902 kubelet[5913]: E0108 21:13:28.700535    5913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.826205  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:29 kubernetes-upgrade-210902 kubelet[5924]: E0108 21:13:29.447713    5924 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.826563  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5935]: E0108 21:13:30.195990    5935 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.826919  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:30 kubernetes-upgrade-210902 kubelet[5946]: E0108 21:13:30.946203    5946 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.827282  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:31 kubernetes-upgrade-210902 kubelet[5958]: E0108 21:13:31.695521    5958 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.827695  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:32 kubernetes-upgrade-210902 kubelet[5969]: E0108 21:13:32.454066    5969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.828067  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5979]: E0108 21:13:33.216578    5979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.828420  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:33 kubernetes-upgrade-210902 kubelet[5989]: E0108 21:13:33.945429    5989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.828772  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:34 kubernetes-upgrade-210902 kubelet[6000]: E0108 21:13:34.696181    6000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.829120  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:35 kubernetes-upgrade-210902 kubelet[6010]: E0108 21:13:35.445631    6010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.829468  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:36 kubernetes-upgrade-210902 kubelet[6161]: E0108 21:13:36.214784    6161 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.829823  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:36 kubernetes-upgrade-210902 kubelet[6171]: E0108 21:13:36.947039    6171 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.830181  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:37 kubernetes-upgrade-210902 kubelet[6182]: E0108 21:13:37.695548    6182 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.830534  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:38 kubernetes-upgrade-210902 kubelet[6193]: E0108 21:13:38.469776    6193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.830887  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:39 kubernetes-upgrade-210902 kubelet[6203]: E0108 21:13:39.204546    6203 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.831246  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:39 kubernetes-upgrade-210902 kubelet[6213]: E0108 21:13:39.944879    6213 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.831619  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:40 kubernetes-upgrade-210902 kubelet[6224]: E0108 21:13:40.695541    6224 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.831975  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:41 kubernetes-upgrade-210902 kubelet[6235]: E0108 21:13:41.446136    6235 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.832398  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6246]: E0108 21:13:42.195645    6246 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.832755  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:42 kubernetes-upgrade-210902 kubelet[6257]: E0108 21:13:42.945994    6257 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.833131  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:43 kubernetes-upgrade-210902 kubelet[6268]: E0108 21:13:43.695611    6268 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.833485  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:44 kubernetes-upgrade-210902 kubelet[6279]: E0108 21:13:44.446951    6279 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.833846  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6290]: E0108 21:13:45.195689    6290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.834222  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:45 kubernetes-upgrade-210902 kubelet[6301]: E0108 21:13:45.945075    6301 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.834576  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:46 kubernetes-upgrade-210902 kubelet[6451]: E0108 21:13:46.694345    6451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.834941  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:47 kubernetes-upgrade-210902 kubelet[6461]: E0108 21:13:47.485269    6461 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.835298  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:48 kubernetes-upgrade-210902 kubelet[6471]: E0108 21:13:48.194061    6471 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.835674  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:48 kubernetes-upgrade-210902 kubelet[6482]: E0108 21:13:48.947504    6482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.836025  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:49 kubernetes-upgrade-210902 kubelet[6493]: E0108 21:13:49.694384    6493 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.836377  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:50 kubernetes-upgrade-210902 kubelet[6504]: E0108 21:13:50.445872    6504 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.836732  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:51 kubernetes-upgrade-210902 kubelet[6515]: E0108 21:13:51.198152    6515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.837085  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:51 kubernetes-upgrade-210902 kubelet[6525]: E0108 21:13:51.945447    6525 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.837443  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:52 kubernetes-upgrade-210902 kubelet[6536]: E0108 21:13:52.693871    6536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.837806  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:53 kubernetes-upgrade-210902 kubelet[6548]: E0108 21:13:53.446740    6548 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.838158  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:54 kubernetes-upgrade-210902 kubelet[6559]: E0108 21:13:54.196067    6559 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.838509  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:54 kubernetes-upgrade-210902 kubelet[6569]: E0108 21:13:54.946137    6569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.838869  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:55 kubernetes-upgrade-210902 kubelet[6580]: E0108 21:13:55.695271    6580 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.839258  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:56 kubernetes-upgrade-210902 kubelet[6591]: E0108 21:13:56.445560    6591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.839645  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:57 kubernetes-upgrade-210902 kubelet[6738]: E0108 21:13:57.193172    6738 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.840012  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:57 kubernetes-upgrade-210902 kubelet[6749]: E0108 21:13:57.946361    6749 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.840366  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:58 kubernetes-upgrade-210902 kubelet[6761]: E0108 21:13:58.698549    6761 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.840724  181838 logs.go:138] Found kubelet problem: Jan 08 21:13:59 kubernetes-upgrade-210902 kubelet[6773]: E0108 21:13:59.444839    6773 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.841094  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:00 kubernetes-upgrade-210902 kubelet[6784]: E0108 21:14:00.195304    6784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.841452  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:00 kubernetes-upgrade-210902 kubelet[6795]: E0108 21:14:00.946849    6795 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.841808  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:01 kubernetes-upgrade-210902 kubelet[6806]: E0108 21:14:01.697208    6806 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.842196  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:02 kubernetes-upgrade-210902 kubelet[6818]: E0108 21:14:02.447776    6818 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.842548  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:03 kubernetes-upgrade-210902 kubelet[6830]: E0108 21:14:03.200414    6830 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.842903  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:03 kubernetes-upgrade-210902 kubelet[6842]: E0108 21:14:03.946176    6842 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.843254  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:04 kubernetes-upgrade-210902 kubelet[6853]: E0108 21:14:04.696838    6853 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.843706  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:05 kubernetes-upgrade-210902 kubelet[6865]: E0108 21:14:05.447467    6865 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.844059  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:06 kubernetes-upgrade-210902 kubelet[6877]: E0108 21:14:06.196870    6877 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.844411  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:06 kubernetes-upgrade-210902 kubelet[6888]: E0108 21:14:06.947179    6888 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.844768  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:07 kubernetes-upgrade-210902 kubelet[7035]: E0108 21:14:07.695296    7035 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.845133  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:08 kubernetes-upgrade-210902 kubelet[7047]: E0108 21:14:08.447815    7047 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.845544  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:09 kubernetes-upgrade-210902 kubelet[7058]: E0108 21:14:09.198501    7058 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.845919  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:09 kubernetes-upgrade-210902 kubelet[7069]: E0108 21:14:09.947510    7069 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.846353  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:10 kubernetes-upgrade-210902 kubelet[7080]: E0108 21:14:10.716470    7080 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.846856  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:11 kubernetes-upgrade-210902 kubelet[7091]: E0108 21:14:11.448711    7091 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.847239  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:12 kubernetes-upgrade-210902 kubelet[7103]: E0108 21:14:12.196512    7103 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.847614  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:12 kubernetes-upgrade-210902 kubelet[7114]: E0108 21:14:12.947726    7114 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.847985  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:13 kubernetes-upgrade-210902 kubelet[7125]: E0108 21:14:13.695591    7125 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.848357  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:14 kubernetes-upgrade-210902 kubelet[7136]: E0108 21:14:14.445907    7136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.848722  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:15 kubernetes-upgrade-210902 kubelet[7147]: E0108 21:14:15.197969    7147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.849072  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:15 kubernetes-upgrade-210902 kubelet[7159]: E0108 21:14:15.945679    7159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.849486  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:16 kubernetes-upgrade-210902 kubelet[7170]: E0108 21:14:16.696174    7170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.849907  181838 logs.go:138] Found kubelet problem: Jan 08 21:14:17 kubernetes-upgrade-210902 kubelet[7181]: E0108 21:14:17.473919    7181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
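The run of kubelet failures above all trace back to a single cause: the kubelet is being launched with --cni-conf-dir, a dockershim-era flag that kubelets from v1.24 onward no longer accept, so every restart attempt exits immediately with "unknown flag". A minimal Go sketch of a pre-upgrade sanity check for such stale flags follows; the flag list and the helper itself are illustrative assumptions for this report, not minikube's actual upgrade code, though the env-file path matches the "[kubelet-start] Writing kubelet environment file" step recorded later in this log.

	// staleflags.go: hypothetical pre-upgrade check for kubelet flags that were
	// removed together with the dockershim in Kubernetes 1.24 and make newer
	// kubelets exit with "failed to parse kubelet flag: unknown flag".
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// Illustrative list; --cni-conf-dir is the flag rejected in the log above.
	var removedFlags = []string{"--cni-conf-dir", "--cni-bin-dir", "--network-plugin"}

	func main() {
		// kubeadm writes the kubelet's extra arguments to this env file.
		data, err := os.ReadFile("/var/lib/kubelet/kubeadm-flags.env")
		if err != nil {
			fmt.Fprintln(os.Stderr, "read kubeadm-flags.env:", err)
			os.Exit(1)
		}
		for _, flag := range removedFlags {
			if strings.Contains(string(data), flag) {
				fmt.Printf("stale kubelet flag %s found; drop it before upgrading past v1.23\n", flag)
			}
		}
	}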
	I0108 21:14:17.850058  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:14:17.850074  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:14:17.870317  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:14:17.870344  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:14:17.926427  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:14:17.926447  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:14:17.926457  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:14:17.962577  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:14:17.962607  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:14:17.990544  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:14:17.990567  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:14:17.990670  181838 out.go:239] X Problems detected in kubelet:
	W0108 21:14:17.990682  181838 out.go:239]   Jan 08 21:14:14 kubernetes-upgrade-210902 kubelet[7136]: E0108 21:14:14.445907    7136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.990688  181838 out.go:239]   Jan 08 21:14:15 kubernetes-upgrade-210902 kubelet[7147]: E0108 21:14:15.197969    7147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.990696  181838 out.go:239]   Jan 08 21:14:15 kubernetes-upgrade-210902 kubelet[7159]: E0108 21:14:15.945679    7159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.990703  181838 out.go:239]   Jan 08 21:14:16 kubernetes-upgrade-210902 kubelet[7170]: E0108 21:14:16.696174    7170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:14:17.990707  181838 out.go:239]   Jan 08 21:14:17 kubernetes-upgrade-210902 kubelet[7181]: E0108 21:14:17.473919    7181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:14:17.990712  181838 out.go:309] Setting ErrFile to fd 2...
	I0108 21:14:17.990716  181838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:14:17.760847  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:20.259734  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:22.759685  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:24.760280  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:27.992616  181838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:14:28.001516  181838 kubeadm.go:631] restartCluster took 4m10.859106409s
	W0108 21:14:28.001655  181838 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0108 21:14:28.001688  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:14:29.868567  181838 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.866858883s)
	I0108 21:14:29.868617  181838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:14:29.879623  181838 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:14:29.887216  181838 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:14:29.887277  181838 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:14:29.894159  181838 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:14:29.894203  181838 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:14:29.927638  181838 kubeadm.go:317] W0108 21:14:29.926890    8509 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:14:29.960107  181838 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:14:30.022728  181838 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:14:27.260100  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:29.261283  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:31.760778  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:33.760930  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:36.259418  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:38.260249  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:40.760365  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:43.260226  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:45.760166  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:47.760225  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:50.260237  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:52.260658  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:54.759950  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:56.761970  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:59.260205  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:01.760508  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:04.260175  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:06.260713  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:08.261203  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:10.262292  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:12.759652  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:14.759911  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:16.760336  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:19.260224  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:21.759829  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:23.760636  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:25.760800  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:28.260216  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:30.760110  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:33.259329  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:35.260623  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:37.760582  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:40.260438  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:42.760414  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:45.260032  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:47.760544  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:47.764619  197178 pod_ready.go:81] duration metric: took 4m0.0168111s waiting for pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace to be "Ready" ...
	E0108 21:15:47.764643  197178 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
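The WaitExtra timeout above is the end of a plain polling loop: the test repeatedly reads the pod and checks its Ready condition until a deadline expires. A rough client-go sketch of that kind of wait is shown below; the kubeconfig path is taken from the describe-nodes commands elsewhere in this log, while the polling interval and overall structure are assumptions for illustration rather than minikube's pod_ready implementation.

	// podready.go: illustrative client-go loop that waits for a pod's Ready
	// condition, similar in spirit to the pod_ready polling recorded above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(5 * time.Minute) // the log waits "up to 5m0s"
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
				"calico-kube-controllers-7df895d496-r6ghs", metav1.GetOptions{})
			if err == nil {
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for the pod to become Ready")
	}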
	I0108 21:15:47.764653  197178 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-94vvb" in "kube-system" namespace to be "Ready" ...
	I0108 21:15:49.775028  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:51.775816  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:54.274676  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:56.275250  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:58.275946  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:00.276006  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:02.774818  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:04.775087  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:07.275002  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:09.275313  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:11.775293  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:13.775560  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:15.775908  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:18.275457  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:20.777623  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:26.082331  181838 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 21:16:26.082444  181838 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0108 21:16:26.085125  181838 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:16:26.085205  181838 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:16:26.085310  181838 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:16:26.085402  181838 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:16:26.085447  181838 kubeadm.go:317] OS: Linux
	I0108 21:16:26.085486  181838 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:16:26.085526  181838 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:16:26.085565  181838 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:16:26.085605  181838 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:16:26.085667  181838 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:16:26.085714  181838 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:16:26.085792  181838 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:16:26.085836  181838 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:16:26.085885  181838 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:16:26.085985  181838 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:16:26.086106  181838 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:16:26.086190  181838 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:16:26.086245  181838 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:16:26.088374  181838 out.go:204]   - Generating certificates and keys ...
	I0108 21:16:26.088447  181838 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:16:26.088534  181838 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:16:26.088602  181838 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:16:26.088652  181838 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:16:26.088714  181838 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:16:26.088780  181838 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:16:26.088837  181838 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:16:26.088890  181838 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:16:26.088957  181838 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:16:26.089016  181838 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:16:26.089052  181838 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:16:26.089096  181838 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:16:26.089175  181838 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:16:26.089224  181838 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:16:26.089287  181838 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:16:26.089337  181838 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:16:26.089444  181838 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:16:26.089536  181838 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:16:26.089588  181838 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:16:26.089681  181838 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:16:23.275310  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:25.276055  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:26.091233  181838 out.go:204]   - Booting up control plane ...
	I0108 21:16:26.091316  181838 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:16:26.091398  181838 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:16:26.091468  181838 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:16:26.091577  181838 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:16:26.091717  181838 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:16:26.091768  181838 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0108 21:16:26.091826  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:16:26.091993  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:16:26.092079  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:16:26.092246  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:16:26.092302  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:16:26.092453  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:16:26.092508  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:16:26.092681  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:16:26.092744  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:16:26.092896  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:16:26.092902  181838 kubeadm.go:317] 
	I0108 21:16:26.092936  181838 kubeadm.go:317] Unfortunately, an error has occurred:
	I0108 21:16:26.092981  181838 kubeadm.go:317] 	timed out waiting for the condition
	I0108 21:16:26.092988  181838 kubeadm.go:317] 
	I0108 21:16:26.093015  181838 kubeadm.go:317] This error is likely caused by:
	I0108 21:16:26.093043  181838 kubeadm.go:317] 	- The kubelet is not running
	I0108 21:16:26.093130  181838 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 21:16:26.093136  181838 kubeadm.go:317] 
	I0108 21:16:26.093260  181838 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 21:16:26.093316  181838 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0108 21:16:26.093361  181838 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0108 21:16:26.093375  181838 kubeadm.go:317] 
	I0108 21:16:26.093499  181838 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 21:16:26.093577  181838 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0108 21:16:26.093651  181838 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0108 21:16:26.093736  181838 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0108 21:16:26.093830  181838 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0108 21:16:26.093909  181838 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	W0108 21:16:26.094151  181838 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0108 21:14:29.926890    8509 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
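The [kubelet-check] lines quoted above come from kubeadm repeatedly probing the kubelet's local health endpoint and getting "connection refused" because the kubelet never started. A self-contained Go sketch of that style of probe follows; the 40-second deadline mirrors the "Initial timeout of 40s" mentioned in the log, and the sketch only approximates kubeadm's wait-control-plane check rather than reproducing its actual code.

	// healthz.go: illustrative probe of the kubelet's local healthz endpoint,
	// the URL the [kubelet-check] messages above report as refusing connections.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		deadline := time.Now().Add(40 * time.Second) // "Initial timeout of 40s" in the log
		for time.Now().Before(deadline) {
			resp, err := http.Get("http://localhost:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("kubelet is healthy")
					return
				}
			}
			// "connection refused" here matches a kubelet process that never came up.
			time.Sleep(time.Second)
		}
		fmt.Println("timed out waiting for http://localhost:10248/healthz")
	}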
	
	I0108 21:16:26.094195  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:16:27.945693  181838 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.851473031s)
	I0108 21:16:27.945756  181838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:16:27.955421  181838 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:16:27.955506  181838 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:16:27.962747  181838 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:16:27.962788  181838 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:16:27.997191  181838 kubeadm.go:317] W0108 21:16:27.996505   11371 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:16:28.033757  181838 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:16:28.097457  181838 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:16:27.775202  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:30.275307  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:32.774913  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:34.775684  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:36.776386  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:39.274564  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:41.275106  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:43.275332  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:45.775266  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:48.274572  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:50.275851  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:52.774873  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:54.775151  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:57.275463  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:59.774447  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:01.775447  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:03.776272  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:06.274775  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:08.775799  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:11.275020  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:13.275599  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:15.775757  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:18.275287  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:20.775550  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:23.274742  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:25.275400  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:27.775025  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:29.775298  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:31.775866  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:34.275210  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:36.275422  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:38.774878  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:40.775011  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:42.776052  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:45.275343  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:47.275620  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:49.775458  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:52.275419  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:54.775432  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:57.275095  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:59.775231  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:02.274708  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:04.775148  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:06.775928  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:09.275384  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:11.779032  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:14.275256  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:16.275823  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:18.774724  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:20.775303  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:23.877590  181838 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 21:18:23.877729  181838 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0108 21:18:23.880688  181838 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:18:23.880765  181838 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:18:23.880880  181838 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:18:23.880936  181838 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:18:23.880969  181838 kubeadm.go:317] OS: Linux
	I0108 21:18:23.881009  181838 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:18:23.881086  181838 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:18:23.881163  181838 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:18:23.881233  181838 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:18:23.881298  181838 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:18:23.881356  181838 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:18:23.881398  181838 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:18:23.881448  181838 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:18:23.881486  181838 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:18:23.881545  181838 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:18:23.881630  181838 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:18:23.881718  181838 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:18:23.881772  181838 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:18:23.883791  181838 out.go:204]   - Generating certificates and keys ...
	I0108 21:18:23.883864  181838 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:18:23.883937  181838 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:18:23.883999  181838 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:18:23.884052  181838 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:18:23.884127  181838 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:18:23.884184  181838 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:18:23.884236  181838 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:18:23.884297  181838 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:18:23.884361  181838 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:18:23.884434  181838 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:18:23.884472  181838 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:18:23.884524  181838 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:18:23.884566  181838 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:18:23.884609  181838 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:18:23.884667  181838 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:18:23.884734  181838 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:18:23.884822  181838 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:18:23.884894  181838 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:18:23.884936  181838 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:18:23.884992  181838 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:18:23.886673  181838 out.go:204]   - Booting up control plane ...
	I0108 21:18:23.886750  181838 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:18:23.886829  181838 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:18:23.886909  181838 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:18:23.886977  181838 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:18:23.887108  181838 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:18:23.887178  181838 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0108 21:18:23.887245  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:18:23.887408  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:18:23.887467  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:18:23.887664  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:18:23.887733  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:18:23.887925  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:18:23.887988  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:18:23.888156  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:18:23.888224  181838 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 21:18:23.888408  181838 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 21:18:23.888422  181838 kubeadm.go:317] 
	I0108 21:18:23.888467  181838 kubeadm.go:317] Unfortunately, an error has occurred:
	I0108 21:18:23.888516  181838 kubeadm.go:317] 	timed out waiting for the condition
	I0108 21:18:23.888524  181838 kubeadm.go:317] 
	I0108 21:18:23.888551  181838 kubeadm.go:317] This error is likely caused by:
	I0108 21:18:23.888579  181838 kubeadm.go:317] 	- The kubelet is not running
	I0108 21:18:23.888671  181838 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 21:18:23.888679  181838 kubeadm.go:317] 
	I0108 21:18:23.888772  181838 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 21:18:23.888806  181838 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0108 21:18:23.888831  181838 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0108 21:18:23.888837  181838 kubeadm.go:317] 
	I0108 21:18:23.888933  181838 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 21:18:23.889026  181838 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0108 21:18:23.889098  181838 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0108 21:18:23.889207  181838 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0108 21:18:23.889294  181838 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0108 21:18:23.889416  181838 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	I0108 21:18:23.889430  181838 kubeadm.go:398] StartCluster complete in 8m6.778484736s
	I0108 21:18:23.889460  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:18:23.889508  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:18:23.915356  181838 cri.go:87] found id: ""
	I0108 21:18:23.915377  181838 logs.go:274] 0 containers: []
	W0108 21:18:23.915382  181838 logs.go:276] No container was found matching "kube-apiserver"
	I0108 21:18:23.915388  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0108 21:18:23.915439  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:18:23.938566  181838 cri.go:87] found id: ""
	I0108 21:18:23.938594  181838 logs.go:274] 0 containers: []
	W0108 21:18:23.938603  181838 logs.go:276] No container was found matching "etcd"
	I0108 21:18:23.938610  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0108 21:18:23.938724  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:18:23.962049  181838 cri.go:87] found id: ""
	I0108 21:18:23.962090  181838 logs.go:274] 0 containers: []
	W0108 21:18:23.962099  181838 logs.go:276] No container was found matching "coredns"
	I0108 21:18:23.962107  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:18:23.962164  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:18:23.985149  181838 cri.go:87] found id: ""
	I0108 21:18:23.985170  181838 logs.go:274] 0 containers: []
	W0108 21:18:23.985175  181838 logs.go:276] No container was found matching "kube-scheduler"
	I0108 21:18:23.985186  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:18:23.985226  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:18:24.009725  181838 cri.go:87] found id: ""
	I0108 21:18:24.009750  181838 logs.go:274] 0 containers: []
	W0108 21:18:24.009756  181838 logs.go:276] No container was found matching "kube-proxy"
	I0108 21:18:24.009764  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0108 21:18:24.009830  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0108 21:18:24.032795  181838 cri.go:87] found id: ""
	I0108 21:18:24.032816  181838 logs.go:274] 0 containers: []
	W0108 21:18:24.032822  181838 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 21:18:24.032829  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:18:24.032873  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:18:24.055830  181838 cri.go:87] found id: ""
	I0108 21:18:24.055856  181838 logs.go:274] 0 containers: []
	W0108 21:18:24.055864  181838 logs.go:276] No container was found matching "storage-provisioner"
	I0108 21:18:24.055873  181838 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:18:24.055926  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:18:24.080022  181838 cri.go:87] found id: ""
	I0108 21:18:24.080045  181838 logs.go:274] 0 containers: []
	W0108 21:18:24.080054  181838 logs.go:276] No container was found matching "kube-controller-manager"
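Each empty "found id" result above comes from asking crictl for containers whose name matches a control-plane component; every query returning nothing confirms that no control-plane container was ever created. A small Go sketch of that query pattern is below, run locally rather than through minikube's ssh_runner; the wrapper function and the component list are illustrative assumptions.

	// crilist.go: illustrative version of the "sudo crictl ps -a --quiet --name=<component>"
	// queries above, returning the matching container IDs for each component.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, comp := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
			ids, err := listContainers(comp)
			if err != nil {
				fmt.Printf("%s: %v\n", comp, err)
				continue
			}
			fmt.Printf("%s: %d containers\n", comp, len(ids))
		}
	}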
	I0108 21:18:24.080065  181838 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:18:24.080087  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 21:18:24.136653  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 21:18:24.136684  181838 logs.go:123] Gathering logs for containerd ...
	I0108 21:18:24.136697  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0108 21:18:24.192311  181838 logs.go:123] Gathering logs for container status ...
	I0108 21:18:24.192341  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:18:24.217880  181838 logs.go:123] Gathering logs for kubelet ...
	I0108 21:18:24.217906  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:18:24.234276  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:34 kubernetes-upgrade-210902 kubelet[12477]: E0108 21:17:34.195356   12477 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.234678  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:34 kubernetes-upgrade-210902 kubelet[12488]: E0108 21:17:34.945977   12488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.235054  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:35 kubernetes-upgrade-210902 kubelet[12499]: E0108 21:17:35.695839   12499 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.235430  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:36 kubernetes-upgrade-210902 kubelet[12510]: E0108 21:17:36.446635   12510 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.235839  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:37 kubernetes-upgrade-210902 kubelet[12521]: E0108 21:17:37.195551   12521 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.236237  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:37 kubernetes-upgrade-210902 kubelet[12532]: E0108 21:17:37.945693   12532 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.236651  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:38 kubernetes-upgrade-210902 kubelet[12543]: E0108 21:17:38.697236   12543 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.237025  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:39 kubernetes-upgrade-210902 kubelet[12554]: E0108 21:17:39.446142   12554 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.237424  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:40 kubernetes-upgrade-210902 kubelet[12564]: E0108 21:17:40.198497   12564 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.237813  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:40 kubernetes-upgrade-210902 kubelet[12575]: E0108 21:17:40.949081   12575 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.238187  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:41 kubernetes-upgrade-210902 kubelet[12586]: E0108 21:17:41.700889   12586 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.238573  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:42 kubernetes-upgrade-210902 kubelet[12597]: E0108 21:17:42.447188   12597 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.238957  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:43 kubernetes-upgrade-210902 kubelet[12608]: E0108 21:17:43.196665   12608 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.239350  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:43 kubernetes-upgrade-210902 kubelet[12619]: E0108 21:17:43.947683   12619 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.239727  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:44 kubernetes-upgrade-210902 kubelet[12631]: E0108 21:17:44.696205   12631 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.240076  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:45 kubernetes-upgrade-210902 kubelet[12643]: E0108 21:17:45.445694   12643 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.240424  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:46 kubernetes-upgrade-210902 kubelet[12654]: E0108 21:17:46.194329   12654 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.240776  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:46 kubernetes-upgrade-210902 kubelet[12665]: E0108 21:17:46.947126   12665 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.241129  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:47 kubernetes-upgrade-210902 kubelet[12677]: E0108 21:17:47.696089   12677 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.241474  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:48 kubernetes-upgrade-210902 kubelet[12688]: E0108 21:17:48.447965   12688 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.241839  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:49 kubernetes-upgrade-210902 kubelet[12698]: E0108 21:17:49.195891   12698 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.242191  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:49 kubernetes-upgrade-210902 kubelet[12709]: E0108 21:17:49.945540   12709 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.242546  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:50 kubernetes-upgrade-210902 kubelet[12720]: E0108 21:17:50.697305   12720 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.242895  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:51 kubernetes-upgrade-210902 kubelet[12731]: E0108 21:17:51.445397   12731 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.243237  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:52 kubernetes-upgrade-210902 kubelet[12742]: E0108 21:17:52.197802   12742 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.243655  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:52 kubernetes-upgrade-210902 kubelet[12754]: E0108 21:17:52.948353   12754 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.244011  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:53 kubernetes-upgrade-210902 kubelet[12765]: E0108 21:17:53.695621   12765 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.244360  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:54 kubernetes-upgrade-210902 kubelet[12776]: E0108 21:17:54.446114   12776 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.244710  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:55 kubernetes-upgrade-210902 kubelet[12788]: E0108 21:17:55.196617   12788 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.245053  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:55 kubernetes-upgrade-210902 kubelet[12799]: E0108 21:17:55.945382   12799 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.245402  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:56 kubernetes-upgrade-210902 kubelet[12810]: E0108 21:17:56.697206   12810 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.245753  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:57 kubernetes-upgrade-210902 kubelet[12822]: E0108 21:17:57.464332   12822 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.246100  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:58 kubernetes-upgrade-210902 kubelet[12833]: E0108 21:17:58.196068   12833 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.246444  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:58 kubernetes-upgrade-210902 kubelet[12844]: E0108 21:17:58.945025   12844 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.246790  181838 logs.go:138] Found kubelet problem: Jan 08 21:17:59 kubernetes-upgrade-210902 kubelet[12855]: E0108 21:17:59.695697   12855 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.247133  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:00 kubernetes-upgrade-210902 kubelet[12866]: E0108 21:18:00.444458   12866 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.247489  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:01 kubernetes-upgrade-210902 kubelet[12877]: E0108 21:18:01.194710   12877 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.247897  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:01 kubernetes-upgrade-210902 kubelet[12888]: E0108 21:18:01.946907   12888 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.248447  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:02 kubernetes-upgrade-210902 kubelet[12899]: E0108 21:18:02.695252   12899 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.248961  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:03 kubernetes-upgrade-210902 kubelet[12910]: E0108 21:18:03.446945   12910 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.249483  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:04 kubernetes-upgrade-210902 kubelet[12921]: E0108 21:18:04.195429   12921 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.249972  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:04 kubernetes-upgrade-210902 kubelet[12932]: E0108 21:18:04.944934   12932 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.250331  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:05 kubernetes-upgrade-210902 kubelet[12943]: E0108 21:18:05.697334   12943 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.250739  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:06 kubernetes-upgrade-210902 kubelet[12955]: E0108 21:18:06.446534   12955 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.251137  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:07 kubernetes-upgrade-210902 kubelet[12966]: E0108 21:18:07.198042   12966 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.251562  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:07 kubernetes-upgrade-210902 kubelet[12978]: E0108 21:18:07.944048   12978 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.251921  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:08 kubernetes-upgrade-210902 kubelet[12989]: E0108 21:18:08.695660   12989 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.252266  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:09 kubernetes-upgrade-210902 kubelet[13000]: E0108 21:18:09.446303   13000 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.252614  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:10 kubernetes-upgrade-210902 kubelet[13010]: E0108 21:18:10.197338   13010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.252969  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:10 kubernetes-upgrade-210902 kubelet[13021]: E0108 21:18:10.947025   13021 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.253315  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:11 kubernetes-upgrade-210902 kubelet[13033]: E0108 21:18:11.699272   13033 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.253670  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:12 kubernetes-upgrade-210902 kubelet[13045]: E0108 21:18:12.446833   13045 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.254068  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:13 kubernetes-upgrade-210902 kubelet[13057]: E0108 21:18:13.196374   13057 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.254420  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:13 kubernetes-upgrade-210902 kubelet[13067]: E0108 21:18:13.946024   13067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.254771  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:14 kubernetes-upgrade-210902 kubelet[13078]: E0108 21:18:14.696082   13078 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.255145  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:15 kubernetes-upgrade-210902 kubelet[13089]: E0108 21:18:15.447827   13089 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.255515  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:16 kubernetes-upgrade-210902 kubelet[13100]: E0108 21:18:16.195195   13100 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.256033  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:16 kubernetes-upgrade-210902 kubelet[13111]: E0108 21:18:16.944749   13111 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.256460  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:17 kubernetes-upgrade-210902 kubelet[13122]: E0108 21:18:17.695276   13122 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.256827  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:18 kubernetes-upgrade-210902 kubelet[13133]: E0108 21:18:18.453924   13133 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.257179  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:19 kubernetes-upgrade-210902 kubelet[13144]: E0108 21:18:19.202840   13144 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.257535  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:19 kubernetes-upgrade-210902 kubelet[13155]: E0108 21:18:19.945755   13155 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.257885  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:20 kubernetes-upgrade-210902 kubelet[13166]: E0108 21:18:20.696665   13166 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.258228  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:21 kubernetes-upgrade-210902 kubelet[13178]: E0108 21:18:21.447207   13178 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.258574  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:22 kubernetes-upgrade-210902 kubelet[13189]: E0108 21:18:22.197045   13189 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.258931  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:22 kubernetes-upgrade-210902 kubelet[13200]: E0108 21:18:22.946229   13200 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0108 21:18:24.259294  181838 logs.go:138] Found kubelet problem: Jan 08 21:18:23 kubernetes-upgrade-210902 kubelet[13212]: E0108 21:18:23.698135   13212 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:18:24.259411  181838 logs.go:123] Gathering logs for dmesg ...
	I0108 21:18:24.259430  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0108 21:18:24.292183  181838 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0108 21:16:27.996505   11371 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0108 21:18:24.292232  181838 out.go:239] * 
	W0108 21:18:24.292425  181838 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0108 21:16:27.996505   11371 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 21:18:24.292451  181838 out.go:239] * 
	W0108 21:18:24.293250  181838 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:18:24.296481  181838 out.go:177] X Problems detected in kubelet:
	I0108 21:18:24.297997  181838 out.go:177]   Jan 08 21:17:34 kubernetes-upgrade-210902 kubelet[12477]: E0108 21:17:34.195356   12477 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:18:24.299429  181838 out.go:177]   Jan 08 21:17:34 kubernetes-upgrade-210902 kubelet[12488]: E0108 21:17:34.945977   12488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:18:24.301053  181838 out.go:177]   Jan 08 21:17:35 kubernetes-upgrade-210902 kubelet[12499]: E0108 21:17:35.695839   12499 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0108 21:18:24.304811  181838 out.go:177] 
	W0108 21:18:24.306808  181838 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1025-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0108 21:16:27.996505   11371 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 21:18:24.306912  181838 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0108 21:18:24.306980  181838 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0108 21:18:24.309112  181838 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sun 2023-01-08 21:09:46 UTC, end at Sun 2023-01-08 21:18:25 UTC. --
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.743007096Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.759562617Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.759604905Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.777367207Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.777426820Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.793486181Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.793537633Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.809146909Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.809198280Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.824796362Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.824855937Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.840861591Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.840912656Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.856996928Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.857046656Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.873495341Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.873558541Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.890927725Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.890984437Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.907332160Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.907379773Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.924041196Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.924087214Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.939993036Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Jan 08 21:16:27 kubernetes-upgrade-210902 containerd[500]: time="2023-01-08T21:16:27.940045635Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +2.959794] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.003874] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027938] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +2.967819] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027861] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.023928] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[Jan 8 21:18] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.009606] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023919] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.963817] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.003888] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.027879] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	
	* 
	* ==> kernel <==
	*  21:18:25 up  1:00,  0 users,  load average: 1.02, 1.90, 1.98
	Linux kubernetes-upgrade-210902 5.15.0-1025-gcp #32~20.04.2-Ubuntu SMP Tue Nov 29 08:31:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:09:46 UTC, end at Sun 2023-01-08 21:18:25 UTC. --
	Jan 08 21:18:22 kubernetes-upgrade-210902 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 08 21:18:22 kubernetes-upgrade-210902 kubelet[13200]: E0108 21:18:22.946229   13200 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Jan 08 21:18:22 kubernetes-upgrade-210902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 08 21:18:22 kubernetes-upgrade-210902 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 08 21:18:23 kubernetes-upgrade-210902 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 153.
	Jan 08 21:18:23 kubernetes-upgrade-210902 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 08 21:18:23 kubernetes-upgrade-210902 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 08 21:18:23 kubernetes-upgrade-210902 kubelet[13212]: E0108 21:18:23.698135   13212 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Jan 08 21:18:23 kubernetes-upgrade-210902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 08 21:18:23 kubernetes-upgrade-210902 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 08 21:18:24 kubernetes-upgrade-210902 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 154.
	Jan 08 21:18:24 kubernetes-upgrade-210902 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 08 21:18:24 kubernetes-upgrade-210902 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 08 21:18:24 kubernetes-upgrade-210902 kubelet[13360]: E0108 21:18:24.461207   13360 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Jan 08 21:18:24 kubernetes-upgrade-210902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 08 21:18:24 kubernetes-upgrade-210902 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 08 21:18:25 kubernetes-upgrade-210902 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 155.
	Jan 08 21:18:25 kubernetes-upgrade-210902 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 08 21:18:25 kubernetes-upgrade-210902 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 08 21:18:25 kubernetes-upgrade-210902 kubelet[13381]: E0108 21:18:25.201604   13381 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Jan 08 21:18:25 kubernetes-upgrade-210902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 08 21:18:25 kubernetes-upgrade-210902 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 08 21:18:25 kubernetes-upgrade-210902 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 156.
	Jan 08 21:18:25 kubernetes-upgrade-210902 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 08 21:18:25 kubernetes-upgrade-210902 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:18:25.857671  233547 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-210902 -n kubernetes-upgrade-210902
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-210902 -n kubernetes-upgrade-210902: exit status 2 (345.370187ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-210902" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-210902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-210902
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-210902: (2.055274769s)
--- FAIL: TestKubernetesUpgrade (566.37s)
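Note on the failure above: the kubelet restart loop ("command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir") shows that a dockershim-era CNI flag is still being passed to the upgraded kubelet, which the v1.25.3 kubelet installed here no longer accepts. The following diagnostic sketch was not part of the test run; the flag file path comes from the kubeadm output above, while the systemd drop-in path is the standard kubeadm/minikube location and is an assumption rather than a value taken from this log:

	# Show the flags the node still passes to kubelet; look for --cni-conf-dir,
	# --cni-bin-dir or --network-plugin, which the v1.25.3 kubelet rejects.
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-210902 -- sudo cat /var/lib/kubelet/kubeadm-flags.env
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-210902 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	# If either file still carries the stale flags, removing them and restarting the unit
	# ('sudo systemctl daemon-reload && sudo systemctl restart kubelet') would let kubeadm's
	# wait-control-plane phase get past the 10248/healthz connection-refused loop.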

                                                
                                    
TestNetworkPlugins/group/calico/Start (516.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-210619 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-210619 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: exit status 80 (8m36.710917508s)

                                                
                                                
-- stdout --
	* [calico-210619] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node calico-210619 in cluster calico-210619
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:11:11.169755  197178 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:11:11.169915  197178 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:11:11.169927  197178 out.go:309] Setting ErrFile to fd 2...
	I0108 21:11:11.169933  197178 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:11:11.170080  197178 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:11:11.170784  197178 out.go:303] Setting JSON to false
	I0108 21:11:11.172956  197178 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3220,"bootTime":1673209051,"procs":1310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:11:11.173028  197178 start.go:135] virtualization: kvm guest
	I0108 21:11:11.176041  197178 out.go:177] * [calico-210619] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:11:11.177794  197178 notify.go:220] Checking for updates...
	I0108 21:11:11.179355  197178 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:11:11.181040  197178 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:11:11.183734  197178 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:11:11.185508  197178 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:11:11.188219  197178 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:11:11.190414  197178 config.go:180] Loaded profile config "cilium-210619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:11:11.190561  197178 config.go:180] Loaded profile config "kindnet-210619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:11:11.190704  197178 config.go:180] Loaded profile config "kubernetes-upgrade-210902": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:11:11.190773  197178 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:11:11.224226  197178 docker.go:137] docker version: linux-20.10.22
	I0108 21:11:11.224328  197178 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:11:11.330503  197178 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:65 SystemTime:2023-01-08 21:11:11.245090226 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:11:11.330601  197178 docker.go:254] overlay module found
	I0108 21:11:11.333715  197178 out.go:177] * Using the docker driver based on user configuration
	I0108 21:11:11.335127  197178 start.go:294] selected driver: docker
	I0108 21:11:11.335141  197178 start.go:838] validating driver "docker" against <nil>
	I0108 21:11:11.335159  197178 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:11:11.336104  197178 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:11:11.457406  197178 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:65 SystemTime:2023-01-08 21:11:11.358622962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:11:11.457515  197178 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I0108 21:11:11.457703  197178 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:11:11.461936  197178 out.go:177] * Using Docker driver with root privileges
	I0108 21:11:11.463792  197178 cni.go:95] Creating CNI manager for "calico"
	I0108 21:11:11.463815  197178 start_flags.go:312] Found "Calico" CNI - setting NetworkPlugin=cni
	I0108 21:11:11.463833  197178 start_flags.go:317] config:
	{Name:calico-210619 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-210619 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containe
rd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:11:11.466122  197178 out.go:177] * Starting control plane node calico-210619 in cluster calico-210619
	I0108 21:11:11.468092  197178 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:11:11.469967  197178 out.go:177] * Pulling base image ...
	I0108 21:11:11.471653  197178 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:11:11.471714  197178 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0108 21:11:11.471729  197178 cache.go:57] Caching tarball of preloaded images
	I0108 21:11:11.471770  197178 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:11:11.472005  197178 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:11:11.472021  197178 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0108 21:11:11.472175  197178 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/config.json ...
	I0108 21:11:11.472209  197178 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/config.json: {Name:mk103669217c9e5a068bee1bc0d09b8b43cfe654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:11:11.496785  197178 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:11:11.496811  197178 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:11:11.496827  197178 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:11:11.496871  197178 start.go:364] acquiring machines lock for calico-210619: {Name:mkafc32689b2fbce1d426f9d02664466f826fd86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:11:11.497016  197178 start.go:368] acquired machines lock for "calico-210619" in 119.224µs
	I0108 21:11:11.497047  197178 start.go:93] Provisioning new machine with config: &{Name:calico-210619 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-210619 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:11:11.497195  197178 start.go:125] createHost starting for "" (driver="docker")
	I0108 21:11:11.501386  197178 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0108 21:11:11.501638  197178 start.go:159] libmachine.API.Create for "calico-210619" (driver="docker")
	I0108 21:11:11.501677  197178 client.go:168] LocalClient.Create starting
	I0108 21:11:11.501759  197178 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem
	I0108 21:11:11.501806  197178 main.go:134] libmachine: Decoding PEM data...
	I0108 21:11:11.501827  197178 main.go:134] libmachine: Parsing certificate...
	I0108 21:11:11.501926  197178 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem
	I0108 21:11:11.501957  197178 main.go:134] libmachine: Decoding PEM data...
	I0108 21:11:11.501977  197178 main.go:134] libmachine: Parsing certificate...
	I0108 21:11:11.502465  197178 cli_runner.go:164] Run: docker network inspect calico-210619 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 21:11:11.532599  197178 cli_runner.go:211] docker network inspect calico-210619 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 21:11:11.532679  197178 network_create.go:272] running [docker network inspect calico-210619] to gather additional debugging logs...
	I0108 21:11:11.532706  197178 cli_runner.go:164] Run: docker network inspect calico-210619
	W0108 21:11:11.558103  197178 cli_runner.go:211] docker network inspect calico-210619 returned with exit code 1
	I0108 21:11:11.558137  197178 network_create.go:275] error running [docker network inspect calico-210619]: docker network inspect calico-210619: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-210619
	I0108 21:11:11.558151  197178 network_create.go:277] output of [docker network inspect calico-210619]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-210619
	
	** /stderr **
	I0108 21:11:11.558204  197178 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:11:11.586228  197178 network.go:244] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b55bc2878bca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d4:2d:1f:91}}
	I0108 21:11:11.587089  197178 network.go:244] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-6ab3f57c56bf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:58:4f:a6:4e}}
	I0108 21:11:11.588168  197178 network.go:306] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000014368] misses:0}
	I0108 21:11:11.588213  197178 network.go:239] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 21:11:11.588233  197178 network_create.go:115] attempt to create docker network calico-210619 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0108 21:11:11.588286  197178 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-210619 calico-210619
	I0108 21:11:11.656599  197178 network_create.go:99] docker network calico-210619 192.168.67.0/24 created
	I0108 21:11:11.656637  197178 kic.go:106] calculated static IP "192.168.67.2" for the "calico-210619" container
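The lines above show minikube creating a dedicated bridge network for the profile and deriving the node's static IP from its subnet. A minimal way to confirm the result on the host, assuming the docker CLI is available and the network still exists, is to query the network's IPAM config directly:

    docker network inspect calico-210619 \
      --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
    # expected for this run: subnet=192.168.67.0/24 gateway=192.168.67.1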
	I0108 21:11:11.656723  197178 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 21:11:11.691643  197178 cli_runner.go:164] Run: docker volume create calico-210619 --label name.minikube.sigs.k8s.io=calico-210619 --label created_by.minikube.sigs.k8s.io=true
	I0108 21:11:11.722367  197178 oci.go:103] Successfully created a docker volume calico-210619
	I0108 21:11:11.722440  197178 cli_runner.go:164] Run: docker run --rm --name calico-210619-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-210619 --entrypoint /usr/bin/test -v calico-210619:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
	I0108 21:11:15.427206  197178 cli_runner.go:217] Completed: docker run --rm --name calico-210619-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-210619 --entrypoint /usr/bin/test -v calico-210619:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib: (3.704717244s)
	I0108 21:11:15.427237  197178 oci.go:107] Successfully prepared a docker volume calico-210619
	I0108 21:11:15.427272  197178 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:11:15.427295  197178 kic.go:179] Starting extracting preloaded images to volume ...
	I0108 21:11:15.427359  197178 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-210619:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 21:11:19.627937  197178 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-210619:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (4.200517232s)
	I0108 21:11:19.627965  197178 kic.go:188] duration metric: took 4.200668 seconds to extract preloaded images to volume
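The extraction step mounts the lz4-compressed preload tarball read-only into a throwaway container and untars it into the calico-210619 volume, so the node starts with all Kubernetes images already present for containerd. The same archive can be listed on the host to see what it ships; a rough sketch, assuming lz4 and tar are installed and the cache path matches the one above:

    lz4 -dc /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 \
      | tar -tf - | head   # list the first few entries without extracting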
	W0108 21:11:19.628076  197178 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 21:11:19.628156  197178 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 21:11:19.738044  197178 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-210619 --name calico-210619 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-210619 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-210619 --network calico-210619 --ip 192.168.67.2 --volume calico-210619:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
	I0108 21:11:20.157793  197178 cli_runner.go:164] Run: docker container inspect calico-210619 --format={{.State.Running}}
	I0108 21:11:20.185484  197178 cli_runner.go:164] Run: docker container inspect calico-210619 --format={{.State.Status}}
	I0108 21:11:20.211041  197178 cli_runner.go:164] Run: docker exec calico-210619 stat /var/lib/dpkg/alternatives/iptables
	I0108 21:11:20.261491  197178 oci.go:144] the created container "calico-210619" has a running status.
	I0108 21:11:20.261523  197178 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/calico-210619/id_rsa...
	I0108 21:11:20.416718  197178 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15565-3617/.minikube/machines/calico-210619/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 21:11:20.502344  197178 cli_runner.go:164] Run: docker container inspect calico-210619 --format={{.State.Status}}
	I0108 21:11:20.531187  197178 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 21:11:20.531215  197178 kic_runner.go:114] Args: [docker exec --privileged calico-210619 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 21:11:20.606327  197178 cli_runner.go:164] Run: docker container inspect calico-210619 --format={{.State.Status}}
	I0108 21:11:20.630364  197178 machine.go:88] provisioning docker machine ...
	I0108 21:11:20.630395  197178 ubuntu.go:169] provisioning hostname "calico-210619"
	I0108 21:11:20.630450  197178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210619
	I0108 21:11:20.660102  197178 main.go:134] libmachine: Using SSH client type: native
	I0108 21:11:20.660372  197178 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32997 <nil> <nil>}
	I0108 21:11:20.660402  197178 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-210619 && echo "calico-210619" | sudo tee /etc/hostname
	I0108 21:11:20.792580  197178 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-210619
	
	I0108 21:11:20.792660  197178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210619
	I0108 21:11:20.818604  197178 main.go:134] libmachine: Using SSH client type: native
	I0108 21:11:20.818796  197178 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 32997 <nil> <nil>}
	I0108 21:11:20.818826  197178 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-210619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-210619/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-210619' | sudo tee -a /etc/hosts; 
				fi
			fi
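The snippet above only rewrites or appends the 127.0.1.1 entry when the hostname is not already present, so re-provisioning does not accumulate duplicate lines. A quick check from the host, assuming the profile is still running, might be:

    minikube ssh -p calico-210619 -- grep -n 'calico-210619' /etc/hosts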
	I0108 21:11:20.939115  197178 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:11:20.939148  197178 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:11:20.939184  197178 ubuntu.go:177] setting up certificates
	I0108 21:11:20.939200  197178 provision.go:83] configureAuth start
	I0108 21:11:20.939253  197178 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-210619
	I0108 21:11:20.963066  197178 provision.go:138] copyHostCerts
	I0108 21:11:20.963126  197178 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:11:20.963136  197178 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:11:20.963212  197178 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:11:20.963316  197178 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:11:20.963327  197178 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:11:20.963371  197178 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:11:20.963447  197178 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:11:20.963456  197178 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:11:20.963571  197178 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:11:20.963656  197178 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.calico-210619 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-210619]
	I0108 21:11:21.163907  197178 provision.go:172] copyRemoteCerts
	I0108 21:11:21.163966  197178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:11:21.164011  197178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210619
	I0108 21:11:21.194738  197178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/calico-210619/id_rsa Username:docker}
	I0108 21:11:21.278862  197178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:11:21.296426  197178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0108 21:11:21.313234  197178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:11:21.330130  197178 provision.go:86] duration metric: configureAuth took 390.916861ms
	I0108 21:11:21.330169  197178 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:11:21.330332  197178 config.go:180] Loaded profile config "calico-210619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:11:21.330344  197178 machine.go:91] provisioned docker machine in 699.960379ms
	I0108 21:11:21.330352  197178 client.go:171] LocalClient.Create took 9.82866604s
	I0108 21:11:21.330373  197178 start.go:167] duration metric: libmachine.API.Create for "calico-210619" took 9.828734011s
	I0108 21:11:21.330385  197178 start.go:300] post-start starting for "calico-210619" (driver="docker")
	I0108 21:11:21.330396  197178 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:11:21.330454  197178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:11:21.330505  197178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210619
	I0108 21:11:21.354546  197178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/calico-210619/id_rsa Username:docker}
	I0108 21:11:21.443208  197178 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:11:21.445908  197178 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:11:21.445933  197178 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:11:21.445944  197178 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:11:21.445949  197178 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:11:21.445958  197178 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:11:21.446009  197178 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:11:21.446071  197178 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:11:21.446161  197178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:11:21.452780  197178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:11:21.470027  197178 start.go:303] post-start completed in 139.628486ms
	I0108 21:11:21.470380  197178 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-210619
	I0108 21:11:21.494541  197178 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/config.json ...
	I0108 21:11:21.494812  197178 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:11:21.494861  197178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210619
	I0108 21:11:21.518842  197178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/calico-210619/id_rsa Username:docker}
	I0108 21:11:21.599932  197178 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:11:21.603830  197178 start.go:128] duration metric: createHost completed in 10.10662248s
	I0108 21:11:21.603854  197178 start.go:83] releasing machines lock for "calico-210619", held for 10.106820658s
	I0108 21:11:21.603945  197178 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-210619
	I0108 21:11:21.626761  197178 ssh_runner.go:195] Run: cat /version.json
	I0108 21:11:21.626814  197178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210619
	I0108 21:11:21.626836  197178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:11:21.626920  197178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210619
	I0108 21:11:21.652317  197178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/calico-210619/id_rsa Username:docker}
	I0108 21:11:21.653141  197178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/calico-210619/id_rsa Username:docker}
	I0108 21:11:21.761995  197178 ssh_runner.go:195] Run: systemctl --version
	I0108 21:11:21.765781  197178 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:11:21.775270  197178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:11:21.784176  197178 docker.go:189] disabling docker service ...
	I0108 21:11:21.784228  197178 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:11:21.799452  197178 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:11:21.808115  197178 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:11:21.897557  197178 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:11:21.977723  197178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:11:21.987005  197178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:11:21.999275  197178 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:11:22.006920  197178 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:11:22.014470  197178 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:11:22.021929  197178 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
	I0108 21:11:22.029384  197178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:11:22.035498  197178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:11:22.041498  197178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:11:22.114174  197178 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:11:22.191000  197178 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:11:22.191080  197178 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:11:22.194549  197178 start.go:472] Will wait 60s for crictl version
	I0108 21:11:22.194601  197178 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:11:22.220236  197178 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:11:22.220304  197178 ssh_runner.go:195] Run: containerd --version
	I0108 21:11:22.243033  197178 ssh_runner.go:195] Run: containerd --version
	I0108 21:11:22.270778  197178 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:11:22.272319  197178 cli_runner.go:164] Run: docker network inspect calico-210619 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:11:22.293773  197178 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0108 21:11:22.297051  197178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:11:22.306047  197178 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:11:22.306113  197178 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:11:22.332433  197178 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:11:22.332451  197178 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:11:22.332490  197178 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:11:22.355567  197178 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:11:22.355588  197178 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:11:22.355622  197178 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:11:22.378873  197178 cni.go:95] Creating CNI manager for "calico"
	I0108 21:11:22.378897  197178 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:11:22.378912  197178 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-210619 NodeName:calico-210619 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:11:22.379036  197178 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "calico-210619"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
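This is the config that gets written out as /var/tmp/minikube/kubeadm.yaml a few lines below and consumed by the kubeadm init call near the end of this section. If the step ever needed to be replayed by hand inside the node, a minimal sketch (same binary path and config path as in this log, skipping only preflight checks the docker driver cannot satisfy) would be:

    sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification,Swap,Mem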
	
	I0108 21:11:22.379114  197178 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calico-210619 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:calico-210619 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
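The [Unit]/[Service] fragment above is installed as the 10-kubeadm.conf drop-in under /etc/systemd/system/kubelet.service.d/ (the scp a few lines below), replacing ExecStart so the kubelet uses the containerd socket and the node IP chosen earlier. Inside the node, the effective unit can be reviewed and re-applied with standard systemd commands; a small sketch:

    systemctl cat kubelet          # base kubelet.service plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload   # pick up changes to the drop-in
    sudo systemctl restart kubelet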
	I0108 21:11:22.379162  197178 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:11:22.386279  197178 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:11:22.386336  197178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:11:22.393208  197178 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (506 bytes)
	I0108 21:11:22.405630  197178 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:11:22.418885  197178 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2042 bytes)
	I0108 21:11:22.434171  197178 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:11:22.437244  197178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:11:22.446377  197178 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619 for IP: 192.168.67.2
	I0108 21:11:22.446475  197178 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:11:22.446512  197178 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:11:22.446568  197178 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/client.key
	I0108 21:11:22.446582  197178 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/client.crt with IP's: []
	I0108 21:11:22.606379  197178 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/client.crt ...
	I0108 21:11:22.606408  197178 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/client.crt: {Name:mk04d9c39de9b63c1d13653bd42068e6e55036f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:11:22.606614  197178 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/client.key ...
	I0108 21:11:22.606629  197178 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/client.key: {Name:mkbc57139929b20e41b8775aeded96130a34a27d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:11:22.606717  197178 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/apiserver.key.c7fa3a9e
	I0108 21:11:22.606735  197178 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 21:11:22.792190  197178 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/apiserver.crt.c7fa3a9e ...
	I0108 21:11:22.792226  197178 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/apiserver.crt.c7fa3a9e: {Name:mkc32b697d948dac09546fd54f179f2d835af060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:11:22.792456  197178 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/apiserver.key.c7fa3a9e ...
	I0108 21:11:22.792472  197178 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/apiserver.key.c7fa3a9e: {Name:mkaa9acfe0c204199255edf485a9b4021cf00155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:11:22.792586  197178 certs.go:320] copying /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/apiserver.crt
	I0108 21:11:22.792658  197178 certs.go:324] copying /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/apiserver.key
	I0108 21:11:22.792701  197178 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/proxy-client.key
	I0108 21:11:22.792713  197178 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/proxy-client.crt with IP's: []
	I0108 21:11:22.964131  197178 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/proxy-client.crt ...
	I0108 21:11:22.964152  197178 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/proxy-client.crt: {Name:mk19c3be1202facfb44468d8045d81ac6f41f5c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:11:22.964365  197178 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/proxy-client.key ...
	I0108 21:11:22.964382  197178 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/proxy-client.key: {Name:mkbcd952379393036f58945f877a488f39b8df67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:11:22.964586  197178 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:11:22.964619  197178 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:11:22.964629  197178 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:11:22.964650  197178 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:11:22.964703  197178 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:11:22.964731  197178 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:11:22.964767  197178 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:11:22.965302  197178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:11:22.984776  197178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:11:23.002957  197178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:11:23.020257  197178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:11:23.037977  197178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:11:23.056460  197178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:11:23.074682  197178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:11:23.092916  197178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:11:23.115714  197178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:11:23.136753  197178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:11:23.166431  197178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:11:23.186378  197178 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:11:23.198562  197178 ssh_runner.go:195] Run: openssl version
	I0108 21:11:23.203432  197178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:11:23.210956  197178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:11:23.214281  197178 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:11:23.214337  197178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:11:23.220545  197178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:11:23.227939  197178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:11:23.234989  197178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:11:23.237899  197178 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:11:23.237939  197178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:11:23.242740  197178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:11:23.251423  197178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:11:23.258688  197178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:11:23.261608  197178 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:11:23.261649  197178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:11:23.266170  197178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
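The hash-then-symlink pattern above is the standard OpenSSL subject-hash lookup: each CA ends up linked as <subject-hash>.0 under /etc/ssl/certs so TLS clients can find it by issuer. The link names used here are exactly the values printed by the preceding openssl calls; for example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints the subject hash (b5213941 in this run), matching /etc/ssl/certs/b5213941.0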
	I0108 21:11:23.275745  197178 kubeadm.go:396] StartCluster: {Name:calico-210619 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-210619 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:11:23.275851  197178 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:11:23.275891  197178 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:11:23.301702  197178 cri.go:87] found id: ""
	I0108 21:11:23.301761  197178 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:11:23.308964  197178 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:11:23.315905  197178 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:11:23.315956  197178 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:11:23.322490  197178 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:11:23.322533  197178 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:11:23.372279  197178 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:11:23.372370  197178 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:11:23.400488  197178 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:11:23.400572  197178 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:11:23.400613  197178 kubeadm.go:317] OS: Linux
	I0108 21:11:23.400662  197178 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:11:23.400715  197178 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:11:23.400767  197178 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:11:23.400822  197178 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:11:23.400874  197178 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:11:23.400937  197178 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:11:23.400990  197178 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:11:23.401045  197178 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:11:23.401098  197178 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:11:23.469795  197178 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:11:23.469971  197178 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:11:23.470098  197178 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:11:23.589234  197178 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:11:23.593759  197178 out.go:204]   - Generating certificates and keys ...
	I0108 21:11:23.593892  197178 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:11:23.593971  197178 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:11:23.953265  197178 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 21:11:24.101921  197178 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0108 21:11:24.246681  197178 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0108 21:11:24.528562  197178 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0108 21:11:24.755786  197178 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0108 21:11:24.756193  197178 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [calico-210619 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0108 21:11:24.884112  197178 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0108 21:11:24.884260  197178 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [calico-210619 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0108 21:11:24.986260  197178 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 21:11:25.353572  197178 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 21:11:25.505014  197178 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0108 21:11:25.505131  197178 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:11:25.827283  197178 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:11:26.143504  197178 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:11:26.273942  197178 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:11:26.451382  197178 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:11:26.463504  197178 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:11:26.464455  197178 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:11:26.464549  197178 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:11:26.545253  197178 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:11:26.547829  197178 out.go:204]   - Booting up control plane ...
	I0108 21:11:26.548004  197178 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:11:26.548098  197178 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:11:26.550680  197178 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:11:26.551604  197178 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:11:26.553700  197178 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:11:33.056411  197178 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.502691 seconds
	I0108 21:11:33.056563  197178 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:11:33.064125  197178 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:11:33.578759  197178 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:11:33.578903  197178 kubeadm.go:317] [mark-control-plane] Marking the node calico-210619 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:11:34.086299  197178 kubeadm.go:317] [bootstrap-token] Using token: 73fvgn.ak71lphbiowyncaa
	I0108 21:11:34.088219  197178 out.go:204]   - Configuring RBAC rules ...
	I0108 21:11:34.088359  197178 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:11:34.091128  197178 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:11:34.095422  197178 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:11:34.099585  197178 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:11:34.101461  197178 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:11:34.103434  197178 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:11:34.110306  197178 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:11:34.318675  197178 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:11:34.512939  197178 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:11:34.514083  197178 kubeadm.go:317] 
	I0108 21:11:34.514148  197178 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:11:34.514154  197178 kubeadm.go:317] 
	I0108 21:11:34.514236  197178 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:11:34.514243  197178 kubeadm.go:317] 
	I0108 21:11:34.514270  197178 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:11:34.514335  197178 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:11:34.514392  197178 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:11:34.514398  197178 kubeadm.go:317] 
	I0108 21:11:34.514458  197178 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 21:11:34.514465  197178 kubeadm.go:317] 
	I0108 21:11:34.514516  197178 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:11:34.514523  197178 kubeadm.go:317] 
	I0108 21:11:34.514581  197178 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:11:34.514665  197178 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:11:34.514741  197178 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:11:34.514749  197178 kubeadm.go:317] 
	I0108 21:11:34.514840  197178 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:11:34.514930  197178 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:11:34.514936  197178 kubeadm.go:317] 
	I0108 21:11:34.515025  197178 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 73fvgn.ak71lphbiowyncaa \
	I0108 21:11:34.515131  197178 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:11:34.515153  197178 kubeadm.go:317] 	--control-plane 
	I0108 21:11:34.515166  197178 kubeadm.go:317] 
	I0108 21:11:34.515258  197178 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:11:34.515265  197178 kubeadm.go:317] 
	I0108 21:11:34.515356  197178 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 73fvgn.ak71lphbiowyncaa \
	I0108 21:11:34.515508  197178 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:11:34.518396  197178 kubeadm.go:317] W0108 21:11:23.363877     741 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:11:34.518653  197178 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:11:34.518811  197178 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:11:34.518847  197178 cni.go:95] Creating CNI manager for "calico"
	I0108 21:11:34.520757  197178 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0108 21:11:34.522410  197178 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:11:34.522429  197178 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202045 bytes)
	I0108 21:11:34.538057  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:11:35.763296  197178 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.22520122s)
	I0108 21:11:35.763353  197178 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:11:35.763462  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:35.763499  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=calico-210619 minikube.k8s.io/updated_at=2023_01_08T21_11_35_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:35.846608  197178 ops.go:34] apiserver oom_adj: -16
	I0108 21:11:35.846709  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:36.419080  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:36.919194  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:37.418677  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:37.918708  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:38.418849  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:38.919499  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:39.419290  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:39.919378  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:40.419399  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:40.918627  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:41.419335  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:41.919545  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:42.418633  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:42.918865  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:43.418688  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:43.919371  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:44.419286  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:44.919227  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:45.419415  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:45.919271  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:46.419455  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:46.919027  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:11:46.981252  197178 kubeadm.go:1067] duration metric: took 11.21784212s to wait for elevateKubeSystemPrivileges.
	I0108 21:11:46.981283  197178 kubeadm.go:398] StartCluster complete in 23.705546286s
	I0108 21:11:46.981302  197178 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:11:46.981406  197178 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:11:46.982611  197178 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:11:47.517223  197178 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-210619" rescaled to 1
	I0108 21:11:47.517416  197178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:11:47.517688  197178 config.go:180] Loaded profile config "calico-210619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:11:47.517800  197178 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0108 21:11:47.517847  197178 addons.go:65] Setting storage-provisioner=true in profile "calico-210619"
	I0108 21:11:47.517873  197178 addons.go:227] Setting addon storage-provisioner=true in "calico-210619"
	W0108 21:11:47.517881  197178 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:11:47.517877  197178 addons.go:65] Setting default-storageclass=true in profile "calico-210619"
	I0108 21:11:47.517906  197178 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-210619"
	I0108 21:11:47.517930  197178 host.go:66] Checking if "calico-210619" exists ...
	I0108 21:11:47.518263  197178 cli_runner.go:164] Run: docker container inspect calico-210619 --format={{.State.Status}}
	I0108 21:11:47.518378  197178 cli_runner.go:164] Run: docker container inspect calico-210619 --format={{.State.Status}}
	I0108 21:11:47.517352  197178 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:11:47.522400  197178 out.go:177] * Verifying Kubernetes components...
	I0108 21:11:47.524650  197178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:11:47.567725  197178 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:11:47.565218  197178 addons.go:227] Setting addon default-storageclass=true in "calico-210619"
	W0108 21:11:47.569530  197178 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:11:47.569565  197178 host.go:66] Checking if "calico-210619" exists ...
	I0108 21:11:47.569668  197178 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:11:47.569697  197178 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:11:47.569769  197178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210619
	I0108 21:11:47.569977  197178 cli_runner.go:164] Run: docker container inspect calico-210619 --format={{.State.Status}}
	I0108 21:11:47.605290  197178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/calico-210619/id_rsa Username:docker}
	I0108 21:11:47.608567  197178 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:11:47.608590  197178 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:11:47.608637  197178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210619
	I0108 21:11:47.650048  197178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32997 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/calico-210619/id_rsa Username:docker}
	I0108 21:11:47.731705  197178 node_ready.go:35] waiting up to 5m0s for node "calico-210619" to be "Ready" ...
	I0108 21:11:47.732020  197178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:11:47.736169  197178 node_ready.go:49] node "calico-210619" has status "Ready":"True"
	I0108 21:11:47.736238  197178 node_ready.go:38] duration metric: took 4.489145ms waiting for node "calico-210619" to be "Ready" ...
	I0108 21:11:47.736268  197178 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:11:47.747776  197178 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace to be "Ready" ...
	I0108 21:11:47.835218  197178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:11:47.910416  197178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:11:49.433709  197178 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.701651565s)
	I0108 21:11:49.433737  197178 start.go:826] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0108 21:11:49.465056  197178 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.629798665s)
	I0108 21:11:49.538427  197178 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.627923836s)
	I0108 21:11:49.540367  197178 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0108 21:11:49.541799  197178 addons.go:488] enableAddons completed in 2.023993908s
	I0108 21:11:49.760533  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:11:51.761216  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:11:53.773008  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:11:56.261231  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:11:58.760050  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:00.762606  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:03.260944  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:05.760616  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:08.259904  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:10.260415  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:12.260717  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:14.760978  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:16.761947  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:19.260368  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:21.260955  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:23.760193  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:26.259881  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:28.260611  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:30.759920  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:32.760967  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:35.260542  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:37.260800  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:39.759673  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:41.760532  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:44.259903  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:46.260146  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:48.260510  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:50.260605  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:52.761258  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:55.260986  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:57.760494  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:12:59.761097  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:02.260296  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:04.260388  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:06.760313  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:09.260256  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:11.763874  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:14.260795  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:16.261448  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:18.760312  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:21.259989  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:23.260086  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:25.260910  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:27.261094  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:29.759810  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:31.760432  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:34.259680  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:36.260714  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:38.760852  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:41.260560  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:43.760444  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:45.760830  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:48.260044  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:50.260311  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:52.759872  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:54.760547  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:56.760942  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:13:59.260238  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:01.759947  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:03.760067  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:05.760630  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:08.261268  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:10.760604  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:13.260530  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:15.760003  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:17.760847  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:20.259734  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:22.759685  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:24.760280  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:27.260100  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:29.261283  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:31.760778  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:33.760930  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:36.259418  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:38.260249  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:40.760365  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:43.260226  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:45.760166  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:47.760225  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:50.260237  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:52.260658  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:54.759950  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:56.761970  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:14:59.260205  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:01.760508  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:04.260175  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:06.260713  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:08.261203  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:10.262292  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:12.759652  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:14.759911  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:16.760336  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:19.260224  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:21.759829  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:23.760636  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:25.760800  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:28.260216  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:30.760110  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:33.259329  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:35.260623  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:37.760582  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:40.260438  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:42.760414  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:45.260032  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:47.760544  197178 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:47.764619  197178 pod_ready.go:81] duration metric: took 4m0.0168111s waiting for pod "calico-kube-controllers-7df895d496-r6ghs" in "kube-system" namespace to be "Ready" ...
	E0108 21:15:47.764643  197178 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0108 21:15:47.764653  197178 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-94vvb" in "kube-system" namespace to be "Ready" ...
	I0108 21:15:49.775028  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:51.775816  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:54.274676  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:56.275250  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:15:58.275946  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:00.276006  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:02.774818  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:04.775087  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:07.275002  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:09.275313  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:11.775293  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:13.775560  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:15.775908  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:18.275457  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:20.777623  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:23.275310  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:25.276055  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:27.775202  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:30.275307  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:32.774913  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:34.775684  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:36.776386  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:39.274564  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:41.275106  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:43.275332  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:45.775266  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:48.274572  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:50.275851  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:52.774873  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:54.775151  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:57.275463  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:59.774447  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:01.775447  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:03.776272  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:06.274775  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:08.775799  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:11.275020  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:13.275599  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:15.775757  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:18.275287  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:20.775550  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:23.274742  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:25.275400  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:27.775025  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:29.775298  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:31.775866  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:34.275210  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:36.275422  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:38.774878  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:40.775011  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:42.776052  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:45.275343  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:47.275620  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:49.775458  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:52.275419  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:54.775432  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:57.275095  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:59.775231  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:02.274708  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:04.775148  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:06.775928  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:09.275384  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:11.779032  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:14.275256  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:16.275823  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:18.774724  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:20.775303  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:23.274929  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:25.275809  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:27.775079  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:29.776029  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:32.275072  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:34.775667  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:37.275807  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:39.774605  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:41.774702  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:43.776908  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:46.275922  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:48.775429  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:51.276819  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:53.774805  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:55.776193  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:18:58.276280  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:00.276841  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:02.775651  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:05.275338  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:07.276054  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:09.774823  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:11.775317  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:13.776357  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:16.275647  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:18.276398  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:20.775479  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:22.776251  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:25.278039  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:27.775398  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:29.775967  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:32.274473  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:34.275413  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:36.275770  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:38.775875  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:41.275334  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:43.276052  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:45.774896  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:47.775344  197178 pod_ready.go:102] pod "calico-node-94vvb" in "kube-system" namespace has status "Ready":"False"
	I0108 21:19:47.780790  197178 pod_ready.go:81] duration metric: took 4m0.016125518s waiting for pod "calico-node-94vvb" in "kube-system" namespace to be "Ready" ...
	E0108 21:19:47.780815  197178 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0108 21:19:47.780835  197178 pod_ready.go:38] duration metric: took 8m0.044534972s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:19:47.783564  197178 out.go:177] 
	W0108 21:19:47.785213  197178 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0108 21:19:47.785235  197178 out.go:239] * 
	* 
	W0108 21:19:47.786481  197178 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:19:47.787681  197178 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (516.73s)
E0108 21:28:21.725177   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:28:36.692010   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
E0108 21:29:04.374250   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
E0108 21:30:15.378186   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 21:30:39.156209   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
E0108 21:30:50.301064   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
E0108 21:30:56.112274   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (351.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135405751s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.155938871s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136565723s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default
E0108 21:13:59.155364   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142080172s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129399048s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129482396s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132545746s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132914351s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0108 21:15:50.301428   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
E0108 21:15:50.306677   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
E0108 21:15:50.316916   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
E0108 21:15:50.337187   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
E0108 21:15:50.377449   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
E0108 21:15:50.457741   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
E0108 21:15:50.618115   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
E0108 21:15:50.938662   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
E0108 21:15:51.579596   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
E0108 21:15:52.860709   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default
E0108 21:15:55.421470   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
E0108 21:15:56.111435   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
E0108 21:16:00.541592   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133441742s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0108 21:16:10.782047   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.120262133s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default
E0108 21:17:39.301404   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
E0108 21:17:39.306677   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
E0108 21:17:39.316943   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
E0108 21:17:39.337209   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
E0108 21:17:39.377525   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
E0108 21:17:39.457807   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
E0108 21:17:39.618215   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
E0108 21:17:39.938783   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
E0108 21:17:40.172215   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:17:40.579857   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
E0108 21:17:41.860672   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
E0108 21:17:44.421024   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133214611s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0108 21:17:49.541902   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
E0108 21:17:57.125836   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 21:17:59.782563   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131743246s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (351.66s)
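For reference, the check that fails above only requires that nslookup through the in-cluster resolver return the kubernetes.default service IP (10.96.0.1). A minimal reproduction sketch against the same profile, assuming the netcat deployment created by the test is still present (illustrative commands, not part of net_test.go):
	# Resolve the API service through the cluster DNS; a healthy run prints an address containing 10.96.0.1.
	kubectl --context enable-default-cni-210619 exec deployment/netcat -- nslookup kubernetes.default
	# If it times out, confirm CoreDNS is running and the kube-dns service has endpoints.
	kubectl --context enable-default-cni-210619 -n kube-system get pods -l k8s-app=kube-dns -o wide
	kubectl --context enable-default-cni-210619 -n kube-system get endpoints kube-dns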

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (360.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128562639s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.124792293s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.123609646s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13460976s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125395363s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0108 21:15:15.378802   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129434612s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127507216s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126322763s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0108 21:16:31.262430   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13259377s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0108 21:16:59.210683   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:16:59.215955   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:16:59.226205   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:16:59.246492   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:16:59.286768   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:16:59.367179   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:16:59.527575   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:16:59.847673   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:17:00.488552   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:17:01.768902   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:17:04.329094   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:17:09.449956   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:17:12.222650   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default
E0108 21:17:19.691032   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129078764s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default
E0108 21:18:20.263590   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
E0108 21:18:21.132808   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.121499838s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0108 21:18:34.143307   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default
E0108 21:19:43.054124   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126174881s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (360.53s)
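The bridge profile fails the same check, and ";; connection timed out; no servers could be reached" means the pod never reached a DNS server at all rather than getting a negative answer. A hedged follow-up sketch, assuming the conventional kube-dns ClusterIP of 10.96.0.10 (not confirmed by this log):
	# Confirm the ClusterIP the pods resolve against and whether it is backed by endpoints.
	kubectl --context bridge-210619 -n kube-system get svc kube-dns
	kubectl --context bridge-210619 -n kube-system get endpoints kube-dns
	# Query an explicit server to separate service/kube-proxy reachability from CoreDNS health (10.96.0.10 is an assumption).
	kubectl --context bridge-210619 exec deployment/netcat -- nslookup kubernetes.default 10.96.0.10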

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (279.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-211828 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-211828 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: exit status 80 (4m37.623965713s)

                                                
                                                
-- stdout --
	* [old-k8s-version-211828] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node old-k8s-version-211828 in cluster old-k8s-version-211828
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on containerd 1.6.10 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:18:28.558391  234278 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:18:28.558506  234278 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:18:28.558515  234278 out.go:309] Setting ErrFile to fd 2...
	I0108 21:18:28.558520  234278 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:18:28.558619  234278 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:18:28.559240  234278 out.go:303] Setting JSON to false
	I0108 21:18:28.560586  234278 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3658,"bootTime":1673209051,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:18:28.560657  234278 start.go:135] virtualization: kvm guest
	I0108 21:18:28.563554  234278 out.go:177] * [old-k8s-version-211828] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:18:28.565345  234278 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:18:28.565269  234278 notify.go:220] Checking for updates...
	I0108 21:18:28.568530  234278 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:18:28.570291  234278 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:18:28.571990  234278 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:18:28.573526  234278 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:18:28.575313  234278 config.go:180] Loaded profile config "bridge-210619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:18:28.575408  234278 config.go:180] Loaded profile config "calico-210619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:18:28.575529  234278 config.go:180] Loaded profile config "enable-default-cni-210619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:18:28.575580  234278 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:18:28.607955  234278 docker.go:137] docker version: linux-20.10.22
	I0108 21:18:28.608052  234278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:18:28.705524  234278 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-08 21:18:28.629701181 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:18:28.705616  234278 docker.go:254] overlay module found
	I0108 21:18:28.708414  234278 out.go:177] * Using the docker driver based on user configuration
	I0108 21:18:28.710056  234278 start.go:294] selected driver: docker
	I0108 21:18:28.710067  234278 start.go:838] validating driver "docker" against <nil>
	I0108 21:18:28.710088  234278 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:18:28.710962  234278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:18:28.808256  234278 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-08 21:18:28.731896818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:18:28.808365  234278 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I0108 21:18:28.808526  234278 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:18:28.810723  234278 out.go:177] * Using Docker driver with root privileges
	I0108 21:18:28.812101  234278 cni.go:95] Creating CNI manager for ""
	I0108 21:18:28.812121  234278 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:18:28.812135  234278 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0108 21:18:28.812141  234278 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0108 21:18:28.812146  234278 start_flags.go:312] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 21:18:28.812162  234278 start_flags.go:317] config:
	{Name:old-k8s-version-211828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-211828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:18:28.813838  234278 out.go:177] * Starting control plane node old-k8s-version-211828 in cluster old-k8s-version-211828
	I0108 21:18:28.815159  234278 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:18:28.816607  234278 out.go:177] * Pulling base image ...
	I0108 21:18:28.818172  234278 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0108 21:18:28.818197  234278 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:18:28.818208  234278 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0108 21:18:28.818218  234278 cache.go:57] Caching tarball of preloaded images
	I0108 21:18:28.818416  234278 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:18:28.818429  234278 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0108 21:18:28.818520  234278 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/config.json ...
	I0108 21:18:28.818543  234278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/config.json: {Name:mkc775b2e93c04e3984015868b295e2478c06bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:18:28.842945  234278 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:18:28.842968  234278 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:18:28.842986  234278 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:18:28.843021  234278 start.go:364] acquiring machines lock for old-k8s-version-211828: {Name:mk7415b788fbdcf6791633774a550ddef2131776 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:18:28.843137  234278 start.go:368] acquired machines lock for "old-k8s-version-211828" in 97.079µs
	I0108 21:18:28.843159  234278 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-211828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-211828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:18:28.843241  234278 start.go:125] createHost starting for "" (driver="docker")
	I0108 21:18:28.846197  234278 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 21:18:28.846428  234278 start.go:159] libmachine.API.Create for "old-k8s-version-211828" (driver="docker")
	I0108 21:18:28.846472  234278 client.go:168] LocalClient.Create starting
	I0108 21:18:28.846535  234278 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem
	I0108 21:18:28.846566  234278 main.go:134] libmachine: Decoding PEM data...
	I0108 21:18:28.846582  234278 main.go:134] libmachine: Parsing certificate...
	I0108 21:18:28.846635  234278 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem
	I0108 21:18:28.846655  234278 main.go:134] libmachine: Decoding PEM data...
	I0108 21:18:28.846664  234278 main.go:134] libmachine: Parsing certificate...
	I0108 21:18:28.846942  234278 cli_runner.go:164] Run: docker network inspect old-k8s-version-211828 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 21:18:28.869887  234278 cli_runner.go:211] docker network inspect old-k8s-version-211828 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 21:18:28.869981  234278 network_create.go:272] running [docker network inspect old-k8s-version-211828] to gather additional debugging logs...
	I0108 21:18:28.870004  234278 cli_runner.go:164] Run: docker network inspect old-k8s-version-211828
	W0108 21:18:28.894530  234278 cli_runner.go:211] docker network inspect old-k8s-version-211828 returned with exit code 1
	I0108 21:18:28.894560  234278 network_create.go:275] error running [docker network inspect old-k8s-version-211828]: docker network inspect old-k8s-version-211828: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-211828
	I0108 21:18:28.894572  234278 network_create.go:277] output of [docker network inspect old-k8s-version-211828]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-211828
	
	** /stderr **
	I0108 21:18:28.894631  234278 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:18:28.919380  234278 network.go:244] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b55bc2878bca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d4:2d:1f:91}}
	I0108 21:18:28.920305  234278 network.go:244] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-6ab3f57c56bf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:58:4f:a6:4e}}
	I0108 21:18:28.920862  234278 network.go:244] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-c9c7b4f8f7ef IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:c7:bc:cf:86}}
	I0108 21:18:28.921666  234278 network.go:306] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc000602000] misses:0}
	I0108 21:18:28.921710  234278 network.go:239] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 21:18:28.921722  234278 network_create.go:115] attempt to create docker network old-k8s-version-211828 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0108 21:18:28.921770  234278 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-211828 old-k8s-version-211828
	I0108 21:18:28.983652  234278 network_create.go:99] docker network old-k8s-version-211828 192.168.76.0/24 created
	I0108 21:18:28.983684  234278 kic.go:106] calculated static IP "192.168.76.2" for the "old-k8s-version-211828" container
	I0108 21:18:28.983758  234278 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 21:18:29.008904  234278 cli_runner.go:164] Run: docker volume create old-k8s-version-211828 --label name.minikube.sigs.k8s.io=old-k8s-version-211828 --label created_by.minikube.sigs.k8s.io=true
	I0108 21:18:29.035287  234278 oci.go:103] Successfully created a docker volume old-k8s-version-211828
	I0108 21:18:29.035357  234278 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-211828-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-211828 --entrypoint /usr/bin/test -v old-k8s-version-211828:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
	I0108 21:18:29.640275  234278 oci.go:107] Successfully prepared a docker volume old-k8s-version-211828
	I0108 21:18:29.640308  234278 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0108 21:18:29.640330  234278 kic.go:179] Starting extracting preloaded images to volume ...
	I0108 21:18:29.640381  234278 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-211828:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 21:18:34.807358  234278 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-211828:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (5.166892201s)
	I0108 21:18:34.807392  234278 kic.go:188] duration metric: took 5.167059 seconds to extract preloaded images to volume
	W0108 21:18:34.807587  234278 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 21:18:34.807698  234278 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 21:18:34.908341  234278 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-211828 --name old-k8s-version-211828 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-211828 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-211828 --network old-k8s-version-211828 --ip 192.168.76.2 --volume old-k8s-version-211828:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
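	The long docker run above is the kic driver creating the node container itself: privileged, pinned to static IP 192.168.76.2 on the network created a few lines earlier, with the preloaded volume mounted at /var and port 8443 published to a random localhost port. Two quick ways to look at the result (standard docker commands, not part of the run):

		docker container inspect old-k8s-version-211828 --format '{{.State.Status}}'
		docker port old-k8s-version-211828 8443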
	I0108 21:18:35.305076  234278 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Running}}
	I0108 21:18:35.332607  234278 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:18:35.358417  234278 cli_runner.go:164] Run: docker exec old-k8s-version-211828 stat /var/lib/dpkg/alternatives/iptables
	I0108 21:18:35.409975  234278 oci.go:144] the created container "old-k8s-version-211828" has a running status.
	I0108 21:18:35.410006  234278 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa...
	I0108 21:18:35.550480  234278 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 21:18:35.638316  234278 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:18:35.665884  234278 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 21:18:35.665917  234278 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-211828 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 21:18:35.750954  234278 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:18:35.781317  234278 machine.go:88] provisioning docker machine ...
	I0108 21:18:35.781359  234278 ubuntu.go:169] provisioning hostname "old-k8s-version-211828"
	I0108 21:18:35.781424  234278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:18:35.808120  234278 main.go:134] libmachine: Using SSH client type: native
	I0108 21:18:35.808338  234278 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33012 <nil> <nil>}
	I0108 21:18:35.808360  234278 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-211828 && echo "old-k8s-version-211828" | sudo tee /etc/hostname
	I0108 21:18:35.940581  234278 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-211828
	
	I0108 21:18:35.940658  234278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:18:35.967935  234278 main.go:134] libmachine: Using SSH client type: native
	I0108 21:18:35.968100  234278 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33012 <nil> <nil>}
	I0108 21:18:35.968123  234278 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-211828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-211828/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-211828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:18:36.083189  234278 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:18:36.083220  234278 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:18:36.083246  234278 ubuntu.go:177] setting up certificates
	I0108 21:18:36.083254  234278 provision.go:83] configureAuth start
	I0108 21:18:36.083299  234278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-211828
	I0108 21:18:36.107866  234278 provision.go:138] copyHostCerts
	I0108 21:18:36.107921  234278 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:18:36.107932  234278 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:18:36.108004  234278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:18:36.108100  234278 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:18:36.108113  234278 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:18:36.108158  234278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:18:36.108233  234278 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:18:36.108243  234278 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:18:36.108281  234278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:18:36.108351  234278 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-211828 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-211828]
	I0108 21:18:36.228329  234278 provision.go:172] copyRemoteCerts
	I0108 21:18:36.228399  234278 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:18:36.228452  234278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:18:36.253330  234278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:18:36.338672  234278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:18:36.356028  234278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 21:18:36.373050  234278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:18:36.389800  234278 provision.go:86] duration metric: configureAuth took 306.535535ms
	I0108 21:18:36.389821  234278 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:18:36.389982  234278 config.go:180] Loaded profile config "old-k8s-version-211828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0108 21:18:36.389991  234278 machine.go:91] provisioned docker machine in 608.64983ms
	I0108 21:18:36.389996  234278 client.go:171] LocalClient.Create took 7.543517721s
	I0108 21:18:36.390010  234278 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-211828" took 7.543585763s
	I0108 21:18:36.390019  234278 start.go:300] post-start starting for "old-k8s-version-211828" (driver="docker")
	I0108 21:18:36.390024  234278 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:18:36.390065  234278 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:18:36.390100  234278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:18:36.414354  234278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:18:36.498763  234278 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:18:36.501267  234278 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:18:36.501298  234278 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:18:36.501315  234278 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:18:36.501326  234278 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:18:36.501341  234278 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:18:36.501391  234278 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:18:36.501486  234278 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:18:36.501581  234278 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:18:36.508018  234278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:18:36.525290  234278 start.go:303] post-start completed in 135.257275ms
	I0108 21:18:36.525697  234278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-211828
	I0108 21:18:36.549894  234278 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/config.json ...
	I0108 21:18:36.550158  234278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:18:36.550205  234278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:18:36.574703  234278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:18:36.655788  234278 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:18:36.659636  234278 start.go:128] duration metric: createHost completed in 7.816384509s
	I0108 21:18:36.659669  234278 start.go:83] releasing machines lock for "old-k8s-version-211828", held for 7.816515765s
	I0108 21:18:36.659739  234278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-211828
	I0108 21:18:36.683425  234278 ssh_runner.go:195] Run: cat /version.json
	I0108 21:18:36.683517  234278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:18:36.683526  234278 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0108 21:18:36.683588  234278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:18:36.711051  234278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:18:36.711047  234278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:18:36.811742  234278 ssh_runner.go:195] Run: systemctl --version
	I0108 21:18:36.815355  234278 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:18:36.824920  234278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:18:36.833545  234278 docker.go:189] disabling docker service ...
	I0108 21:18:36.833582  234278 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:18:36.849146  234278 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:18:36.857864  234278 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:18:36.939764  234278 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:18:37.019745  234278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:18:37.028637  234278 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:18:37.040799  234278 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.1"|' -i /etc/containerd/config.toml"
	I0108 21:18:37.048568  234278 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:18:37.056508  234278 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:18:37.064314  234278 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 21:18:37.071924  234278 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:18:37.078326  234278 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:18:37.084493  234278 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:18:37.155166  234278 ssh_runner.go:195] Run: sudo systemctl restart containerd
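	The four sed edits above rewrite /etc/containerd/config.toml (sandbox image, OOM score handling, cgroup driver, CNI conf dir) and only take effect after the daemon-reload/restart pair that follows them. A simple spot-check on the node:

		sudo grep -E 'sandbox_image|SystemdCgroup|conf_dir|restrict_oom_score_adj' /etc/containerd/config.toml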
	I0108 21:18:37.220373  234278 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:18:37.220446  234278 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:18:37.224355  234278 start.go:472] Will wait 60s for crictl version
	I0108 21:18:37.224413  234278 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:18:37.250378  234278 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:18:37.250433  234278 ssh_runner.go:195] Run: containerd --version
	I0108 21:18:37.273616  234278 ssh_runner.go:195] Run: containerd --version
	I0108 21:18:37.299866  234278 out.go:177] * Preparing Kubernetes v1.16.0 on containerd 1.6.10 ...
	I0108 21:18:37.301524  234278 cli_runner.go:164] Run: docker network inspect old-k8s-version-211828 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:18:37.325514  234278 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0108 21:18:37.328737  234278 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:18:37.340233  234278 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0108 21:18:37.341848  234278 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0108 21:18:37.341902  234278 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:18:37.365146  234278 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:18:37.365164  234278 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:18:37.365209  234278 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:18:37.388397  234278 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:18:37.388414  234278 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:18:37.388464  234278 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:18:37.434022  234278 cni.go:95] Creating CNI manager for ""
	I0108 21:18:37.434048  234278 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:18:37.434077  234278 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:18:37.434097  234278 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-211828 NodeName:old-k8s-version-211828 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:18:37.434247  234278 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-211828"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-211828
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
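	The generated kubeadm config above is four documents in one file: InitConfiguration (node registration, advertise address), ClusterConfiguration (API server SANs, etcd, pod and service CIDRs), KubeletConfiguration and KubeProxyConfiguration. After kubeadm init has run, the kubelet portion is also stored in-cluster and can be read back (using the ConfigMap name that kubeadm creates later in this log):

		kubectl -n kube-system get configmap kubelet-config-1.16 -o yaml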
	I0108 21:18:37.434366  234278 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-211828 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-211828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:18:37.434431  234278 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0108 21:18:37.443126  234278 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:18:37.443196  234278 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:18:37.451155  234278 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (567 bytes)
	I0108 21:18:37.465290  234278 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:18:37.479049  234278 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2128 bytes)
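	The three scp-from-memory steps above write the kubelet drop-in, the kubelet unit and the kubeadm config onto the node. If the kubelet later fails to pick up the flags, the rendered unit can be inspected directly in the container (standard systemd commands, not captured in this run):

		systemctl cat kubelet
		sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf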
	I0108 21:18:37.493777  234278 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:18:37.497077  234278 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:18:37.507728  234278 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828 for IP: 192.168.76.2
	I0108 21:18:37.507857  234278 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:18:37.507909  234278 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:18:37.507967  234278 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/client.key
	I0108 21:18:37.507984  234278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/client.crt with IP's: []
	I0108 21:18:37.591364  234278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/client.crt ...
	I0108 21:18:37.591393  234278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/client.crt: {Name:mkec97e7a44569e96246069830aa4b193328c55a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:18:37.591638  234278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/client.key ...
	I0108 21:18:37.591658  234278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/client.key: {Name:mk1f3e106da7d2aaebff948655f3d6aed6cdba99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:18:37.591790  234278 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.key.31bdca25
	I0108 21:18:37.591810  234278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 21:18:37.652297  234278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.crt.31bdca25 ...
	I0108 21:18:37.652326  234278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.crt.31bdca25: {Name:mk7dac2c7fa7808f1fe48053050734032aee27a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:18:37.652501  234278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.key.31bdca25 ...
	I0108 21:18:37.652513  234278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.key.31bdca25: {Name:mk9d95adffbc8fcc24ce926443e1e7763aa2c465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:18:37.652595  234278 certs.go:320] copying /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.crt
	I0108 21:18:37.652658  234278 certs.go:324] copying /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.key
	I0108 21:18:37.652705  234278 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/proxy-client.key
	I0108 21:18:37.652718  234278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/proxy-client.crt with IP's: []
	I0108 21:18:37.791396  234278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/proxy-client.crt ...
	I0108 21:18:37.791421  234278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/proxy-client.crt: {Name:mk5d25c6a35b74a55098e36ce21997fbd5c70e15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:18:37.791625  234278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/proxy-client.key ...
	I0108 21:18:37.791639  234278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/proxy-client.key: {Name:mk5d934922e01f8b02bffa3bdbe68991319c3424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:18:37.791812  234278 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:18:37.791845  234278 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:18:37.791860  234278 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:18:37.791882  234278 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:18:37.791912  234278 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:18:37.791934  234278 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:18:37.791973  234278 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:18:37.792530  234278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:18:37.810213  234278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 21:18:37.827598  234278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:18:37.844717  234278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:18:37.862432  234278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:18:37.880374  234278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:18:37.897106  234278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:18:37.914572  234278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:18:37.930931  234278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:18:37.947973  234278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:18:37.965343  234278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:18:37.982517  234278 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:18:37.995160  234278 ssh_runner.go:195] Run: openssl version
	I0108 21:18:38.000071  234278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:18:38.007311  234278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:18:38.010153  234278 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:18:38.010199  234278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:18:38.014969  234278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:18:38.021864  234278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:18:38.028881  234278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:18:38.031748  234278 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:18:38.031793  234278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:18:38.036464  234278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:18:38.043407  234278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:18:38.050267  234278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:18:38.053300  234278 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:18:38.053352  234278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:18:38.058034  234278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:18:38.065225  234278 kubeadm.go:396] StartCluster: {Name:old-k8s-version-211828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-211828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
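	The struct above is the full profile configuration that was saved to .../profiles/old-k8s-version-211828/config.json a few lines earlier; it is what a later start of the same profile reloads. A read-only way to see the same data from the Jenkins host (sketch, assuming the usual output flag):

		out/minikube-linux-amd64 profile list --output json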
	I0108 21:18:38.065295  234278 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:18:38.065326  234278 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:18:38.088891  234278 cri.go:87] found id: ""
	I0108 21:18:38.088944  234278 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:18:38.095762  234278 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:18:38.102429  234278 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:18:38.102472  234278 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:18:38.109525  234278 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:18:38.109569  234278 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:18:38.154871  234278 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0108 21:18:38.154944  234278 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:18:38.184503  234278 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:18:38.184603  234278 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:18:38.184665  234278 kubeadm.go:317] OS: Linux
	I0108 21:18:38.184738  234278 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:18:38.184802  234278 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:18:38.184859  234278 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:18:38.184922  234278 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:18:38.184984  234278 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:18:38.185035  234278 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:18:38.254537  234278 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:18:38.254694  234278 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:18:38.254812  234278 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:18:38.384624  234278 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:18:38.386494  234278 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:18:38.393127  234278 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0108 21:18:38.465227  234278 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:18:38.468703  234278 out.go:204]   - Generating certificates and keys ...
	I0108 21:18:38.468862  234278 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:18:38.468971  234278 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:18:38.701601  234278 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 21:18:38.785605  234278 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0108 21:18:39.053022  234278 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0108 21:18:39.208310  234278 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0108 21:18:39.493092  234278 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0108 21:18:39.493288  234278 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-211828 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0108 21:18:39.683698  234278 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0108 21:18:39.683835  234278 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-211828 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0108 21:18:39.819980  234278 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 21:18:40.021204  234278 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 21:18:40.093994  234278 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0108 21:18:40.094157  234278 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:18:40.221794  234278 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:18:40.648095  234278 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:18:40.730376  234278 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:18:40.946705  234278 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:18:40.947543  234278 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:18:40.949931  234278 out.go:204]   - Booting up control plane ...
	I0108 21:18:40.950055  234278 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:18:40.954424  234278 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:18:40.955241  234278 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:18:40.955993  234278 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:18:40.957914  234278 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:18:49.460496  234278 kubeadm.go:317] [apiclient] All control plane components are healthy after 8.502459 seconds
	I0108 21:18:49.460660  234278 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:18:49.471292  234278 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:18:49.984856  234278 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:18:49.985048  234278 kubeadm.go:317] [mark-control-plane] Marking the node old-k8s-version-211828 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0108 21:18:50.490715  234278 kubeadm.go:317] [bootstrap-token] Using token: 42h4i0.2fsfdy0x2dvm1gra
	I0108 21:18:50.492391  234278 out.go:204]   - Configuring RBAC rules ...
	I0108 21:18:50.492536  234278 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:18:50.496464  234278 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:18:50.499300  234278 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:18:50.502596  234278 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:18:50.504350  234278 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:18:50.545251  234278 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:18:50.905182  234278 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:18:50.906915  234278 kubeadm.go:317] 
	I0108 21:18:50.906992  234278 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:18:50.907021  234278 kubeadm.go:317] 
	I0108 21:18:50.907135  234278 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:18:50.907145  234278 kubeadm.go:317] 
	I0108 21:18:50.907169  234278 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:18:50.907248  234278 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:18:50.907334  234278 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:18:50.907346  234278 kubeadm.go:317] 
	I0108 21:18:50.907417  234278 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:18:50.907558  234278 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:18:50.907659  234278 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:18:50.907670  234278 kubeadm.go:317] 
	I0108 21:18:50.907776  234278 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities 
	I0108 21:18:50.907883  234278 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:18:50.907894  234278 kubeadm.go:317] 
	I0108 21:18:50.907997  234278 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 42h4i0.2fsfdy0x2dvm1gra \
	I0108 21:18:50.908151  234278 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:18:50.908187  234278 kubeadm.go:317]     --control-plane 	  
	I0108 21:18:50.908194  234278 kubeadm.go:317] 
	I0108 21:18:50.908299  234278 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:18:50.908310  234278 kubeadm.go:317] 
	I0108 21:18:50.908423  234278 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 42h4i0.2fsfdy0x2dvm1gra \
	I0108 21:18:50.908593  234278 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:18:50.910724  234278 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:18:50.910831  234278 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:18:50.910858  234278 cni.go:95] Creating CNI manager for ""
	I0108 21:18:50.910866  234278 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:18:50.912908  234278 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:18:50.914397  234278 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:18:50.918271  234278 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0108 21:18:50.918287  234278 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:18:50.932736  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
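	The kubectl apply above installs the kindnet CNI manifest that was written to /var/tmp/minikube/cni.yaml. Whether the CNI pods actually come up can be checked with the same bundled kubectl (illustrative; the DaemonSet name depends on the manifest):

		sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get ds,pods -o wide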
	I0108 21:18:51.276220  234278 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:18:51.276342  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:51.276354  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=old-k8s-version-211828 minikube.k8s.io/updated_at=2023_01_08T21_18_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:51.355407  234278 ops.go:34] apiserver oom_adj: -16
	I0108 21:18:51.355446  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:51.974722  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:52.474819  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:52.974993  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:53.474910  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:53.974201  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:54.474470  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:54.974458  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:55.474688  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:55.974170  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:56.474342  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:56.974327  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:57.474854  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:57.974417  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:58.474513  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:58.974279  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:59.475060  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:18:59.974536  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:00.474955  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:00.974243  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:01.474719  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:01.974870  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:02.474671  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:02.974623  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:03.474190  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:03.974443  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:04.474957  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:04.974262  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:05.474834  234278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:05.554487  234278 kubeadm.go:1067] duration metric: took 14.278196789s to wait for elevateKubeSystemPrivileges.
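	The burst of get sa default calls above is minikube polling, roughly twice a second, until the default ServiceAccount exists, which is what the elevateKubeSystemPrivileges duration metric measures. The same check by hand would be:

		sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n default get serviceaccount default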
	I0108 21:19:05.554524  234278 kubeadm.go:398] StartCluster complete in 27.489305408s
	I0108 21:19:05.554545  234278 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:19:05.554655  234278 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:19:05.556072  234278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:19:06.073443  234278 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-211828" rescaled to 1
	I0108 21:19:06.073498  234278 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:19:06.075105  234278 out.go:177] * Verifying Kubernetes components...
	I0108 21:19:06.073588  234278 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:19:06.073598  234278 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0108 21:19:06.073731  234278 config.go:180] Loaded profile config "old-k8s-version-211828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0108 21:19:06.076741  234278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:19:06.076760  234278 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-211828"
	I0108 21:19:06.076787  234278 addons.go:227] Setting addon storage-provisioner=true in "old-k8s-version-211828"
	W0108 21:19:06.076796  234278 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:19:06.076835  234278 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-211828"
	I0108 21:19:06.076870  234278 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-211828"
	I0108 21:19:06.076840  234278 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:19:06.077245  234278 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:19:06.077378  234278 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:19:06.093026  234278 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-211828" to be "Ready" ...
	I0108 21:19:06.123665  234278 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:19:06.125474  234278 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:19:06.125494  234278 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:19:06.125554  234278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:19:06.124270  234278 addons.go:227] Setting addon default-storageclass=true in "old-k8s-version-211828"
	W0108 21:19:06.125862  234278 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:19:06.125896  234278 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:19:06.126351  234278 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:19:06.166106  234278 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:19:06.166129  234278 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:19:06.166184  234278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:19:06.176246  234278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:19:06.205809  234278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:19:06.218498  234278 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:19:06.329174  234278 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:19:06.330687  234278 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:19:06.626730  234278 start.go:826] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0108 21:19:06.779375  234278 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0108 21:19:06.780840  234278 addons.go:488] enableAddons completed in 707.242516ms
	I0108 21:19:08.102788  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:10.102828  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:12.603083  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:15.102577  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:17.602001  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:20.102784  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:22.602803  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:25.102821  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:27.602490  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:30.102056  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:32.602845  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:35.102533  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:37.102900  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:39.602261  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:41.602495  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:43.602573  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:46.102717  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:48.602468  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:50.602763  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:53.102559  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:55.602356  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:19:58.102099  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:00.103346  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:02.602182  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:04.602359  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:07.102747  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:09.602330  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:12.102026  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:14.102783  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:16.102974  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:18.602558  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:21.102240  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:23.102735  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:25.102841  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:27.103257  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:29.603300  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:32.102001  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:34.602824  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:37.102537  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:39.602731  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:42.102954  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:44.602813  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:47.101982  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:49.102371  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:51.602964  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:54.102773  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:56.102943  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:20:58.602400  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:01.102121  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:03.102695  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:05.102742  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:07.602250  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:09.602390  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:12.102789  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:14.602345  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:17.102934  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:19.602327  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:21.602862  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:24.102717  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:26.602573  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:29.102196  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:31.102881  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:33.602189  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:35.602598  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:38.102223  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:40.602552  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:43.103011  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:45.603311  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:48.102009  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:50.102594  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:52.601892  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:54.602771  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:57.101897  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:59.101923  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:01.101962  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:03.102909  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:05.602550  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:07.602641  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:10.102194  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:12.102500  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:14.102962  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:16.602092  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:19.102762  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:21.602905  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:24.102645  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:26.602186  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:28.602252  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:31.102800  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:33.602137  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:36.101829  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:38.102615  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:40.102951  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:42.104260  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:44.602836  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:47.102043  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:49.102979  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:51.601953  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:53.602267  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:55.602823  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:58.101977  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:23:00.102056  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:23:02.602631  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:23:05.102809  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:23:06.104150  234278 node_ready.go:38] duration metric: took 4m0.01108953s waiting for node "old-k8s-version-211828" to be "Ready" ...
	I0108 21:23:06.106363  234278 out.go:177] 
	W0108 21:23:06.107917  234278 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:23:06.107934  234278 out.go:239] * 
	* 
	W0108 21:23:06.108813  234278 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:23:06.110698  234278 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-211828 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0": exit status 80
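For local debugging, the failing first start can be replayed and the guest inspected with the same commands this run used (a sketch only; it assumes a locally built out/minikube-linux-amd64, a running docker daemon, and the profile name old-k8s-version-211828 taken from the failure line above):

	# Re-run the exact invocation that exited with status 80
	out/minikube-linux-amd64 start -p old-k8s-version-211828 --memory=2200 \
	  --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.16.0

	# Collect the full log bundle referenced in the error box, then check
	# container and kubelet state inside the guest (crictl is available over
	# minikube ssh, as used elsewhere in this run)
	out/minikube-linux-amd64 -p old-k8s-version-211828 logs --file=logs.txt
	out/minikube-linux-amd64 ssh -p old-k8s-version-211828 -- sudo crictl ps -a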
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-211828
helpers_test.go:235: (dbg) docker inspect old-k8s-version-211828:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9",
	        "Created": "2023-01-08T21:18:34.933200191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 235016,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:18:35.293925019Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/hostname",
	        "HostsPath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/hosts",
	        "LogPath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9-json.log",
	        "Name": "/old-k8s-version-211828",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-211828:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-211828",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c-init/diff:/var/lib/docker/overlay2/08b33854295261bb38a0074055b3dd7f2f489d35ea70ee5a594182ad1dc0fd2b/diff:/var/lib/docker/overlay2/5abe192e44c49d2a25560e4c1ae4a6ba5ec58100f8e1869d306a28b614caf414/diff:/var/lib/docker/overlay2/156a32df2dac676d7cd6558d5ad13be5054eb2fddc8d1b10137bf34bd6b1ddf7/diff:/var/lib/docker/overlay2/f3a5b8630133b5cdc48bbdb98f666efad221386dbd08f5fc096c7ba6d3f31667/diff:/var/lib/docker/overlay2/4d4ccd218e479874818175ec6c4e28f06f0084eed74b9a83af5a96e8bf41e6c7/diff:/var/lib/docker/overlay2/ab8a3247b29cb45a0024dbc2477a50b042f03e39064c09e23d21a2b5f331013c/diff:/var/lib/docker/overlay2/cd6161fa5d366910a9eb537894f374c49b0ba256c81a9c1ad3d4f4786d730668/diff:/var/lib/docker/overlay2/a26d88db409214dee405ed385f40b0d58e37ef670c26e65c1132eb27e3bb984d/diff:/var/lib/docker/overlay2/f76d833c20bf46fe7aec412f931039cdeb58b129552b97228c55fada8b05e63a/diff:/var/lib/docker/overlay2/584beb
80220462b8bc5a6c651735d55544ebef775c71b1a91b92235049a7e9da/diff:/var/lib/docker/overlay2/5209ebd508837f05291d5f22eff307db47f9c94691dd096cdc6ff6446db45461/diff:/var/lib/docker/overlay2/968fc302c0737742d4698bb0026b5cfc50c4b2934930db492c97b346fca2bd67/diff:/var/lib/docker/overlay2/b469d09ff82a75fb0f6dd04264a04b21d3d113c8577afc6dbc6a7dafe4022179/diff:/var/lib/docker/overlay2/2869116cf2f02bdc6170bde623a11022eba8e5c0f84630c9a0fd0a86483d0ed2/diff:/var/lib/docker/overlay2/ad4794ae637b0ce306963007b35562110fba050d9e3cd6196830448c749d7cc6/diff:/var/lib/docker/overlay2/a53b191897dfffb12ad2948cad16d668a0f7c2be5e3e6fd2556fe22d8d608879/diff:/var/lib/docker/overlay2/bbc7772821406640e29f442eb8752d92d7b72e8314035d6cf94255976b90642e/diff:/var/lib/docker/overlay2/5e3ac5600af83e3f6f756895fbd7a66fa97e799f659541508353078d057fd89b/diff:/var/lib/docker/overlay2/79c23e7084aa1de637cb09a7c07679d6c8d4ff6c6e7384f7f15cb549f8186fd6/diff:/var/lib/docker/overlay2/86ef1d4759376f1edb9cf43004ae59ba61200d79acea357b872de5247071531a/diff:/var/lib/d
ocker/overlay2/5c948a7514e051c2a3b4600944d475a674404c1f58b1bd7fd4d7b764be35c3e7/diff:/var/lib/docker/overlay2/71b25d81823337d776e0202f623b2f822a2ebba0e8f935e3211da145bbdac22d/diff:/var/lib/docker/overlay2/4261f26f8c837e7f8f07ba0cd2aca7274aa96561ac0bd104af28d44c15678f8d/diff:/var/lib/docker/overlay2/f2a71b21156c3ad367bb9fe8bde3e75df6681f9e505d34d8e1b635e50aa0e29b/diff:/var/lib/docker/overlay2/9ef1fde443b60bf4cda7ef2a51adcacef4383280f30eb6ee14e23959a5395e18/diff:/var/lib/docker/overlay2/aa727d5a432452b3e1ed2fc994db0bf2d5a47af89bc768a1e2b633baa8319025/diff:/var/lib/docker/overlay2/dc9499b738334b5626de1363086d853c4e6c2c5b2d51f58e8e1ed8346cc39f45/diff:/var/lib/docker/overlay2/03cc962675627b454b9857b5ffcac439143653d35cd42ad886571fbecd955c6c/diff:/var/lib/docker/overlay2/32064543ecb8bae609fda6b220233d7b73f4e601bd320e1d389451fb120c2459/diff:/var/lib/docker/overlay2/a0c00493cc3fdbdaa87f61faf83c8fcedb9ee4d5b24ecc35eceba0429b6dcc88/diff:/var/lib/docker/overlay2/18cbe77f8dab8144889e1deb46a8795fd207afbdfd9ea3fdeb526b72d7e
4a4cb/diff:/var/lib/docker/overlay2/dca4f5d0bca28bc75fa764afd39d07871f051c2d362fc2850ca957b975f75555/diff:/var/lib/docker/overlay2/cd071d8f661b9ff9c6d861fce06842188fda3d307062488b93944a97883f4a67/diff:/var/lib/docker/overlay2/5a0c166a1e11d0bb9f7de4a882be960f70ac972e91b3ab8fd83fc6eec2bf1d5e/diff:/var/lib/docker/overlay2/43eb3ae44d3bf3e6cbf5a66accacb1247ef210213c56a67097949d06794f4bbc/diff:/var/lib/docker/overlay2/b60cb5b525f4aed6b27cf2e451596e134d1185254f3798d5a91357814552e273/diff:/var/lib/docker/overlay2/e5c8c85e78fedb1981436f368ad70bdfe38d240dddceac3d103e1d09bc252cef/diff:/var/lib/docker/overlay2/7bf66ab7d17eda3c6760fe43588afa33bb56279f5a7202de9cc98b4711f0da5b/diff:/var/lib/docker/overlay2/d5dbd6e8a20b8681e7b55b57c382c8c9029da8496cc9e0f4d1eb3b224583535e/diff:/var/lib/docker/overlay2/de3ca9b514aa5071c0fd39d0d80fa080bce6e82e872747ec916ef7bf379c0c2a/diff:/var/lib/docker/overlay2/80bff03df7f2a6639d98a2c99d1cc55a36dc9110501452bfd7bce3bce496541e/diff:/var/lib/docker/overlay2/471bea5a4732f210c3fcf87a02643b615a96f6
39d01ac2c3e71f7886b23733a5/diff:/var/lib/docker/overlay2/29dd84326e3ef52c85008a60a7ff2225fd18f1d51ec3e60f143a0d93e442469d/diff:/var/lib/docker/overlay2/05d035eba3e40a396836d9917064869afd7ec70920f526f8903dc8f5a9f2dce3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-211828",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-211828/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-211828",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-211828",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-211828",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd7a2d331da5df8a5ad26b1a11ef8071062a8308e1e900de389b1fcbf053e8d0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33012"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33011"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33008"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33010"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33009"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cd7a2d331da5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-211828": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f66150df9bfb",
	                        "old-k8s-version-211828"
	                    ],
	                    "NetworkID": "e48a739a7de53b0a2a21ddeaf3e573efe5bbf8c41c6a15cbe1e7c39d0f359d82",
	                    "EndpointID": "b0b05a18f751ba3ee859f73690ebd1a61bca7d47388946fae5701f1b0d051310",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
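The 127.0.0.1:33012 SSH endpoint logged by sshutil.go:53 at 21:19:06 corresponds to the "22/tcp" entry under NetworkSettings.Ports in the inspect output above. It can be read back with the same Go template the harness passes to cli_runner, shown here as a plain docker CLI call (a sketch, assuming the container is still running):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-211828
	# expected output for this run: 33012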
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-211828 -n old-k8s-version-211828
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-211828 logs -n 25
E0108 21:23:06.985244   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-210943                                   | pause-210943                 | jenkins | v1.28.0 | 08 Jan 23 21:11 UTC | 08 Jan 23 21:11 UTC |
	| start   | -p cilium-210619 --memory=2048                    | cilium-210619                | jenkins | v1.28.0 | 08 Jan 23 21:11 UTC | 08 Jan 23 21:12 UTC |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                              |         |         |                     |                     |
	|         | --cni=cilium --driver=docker                      |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-210725                         | cert-expiration-210725       | jenkins | v1.28.0 | 08 Jan 23 21:11 UTC | 08 Jan 23 21:11 UTC |
	| start   | -p calico-210619 --memory=2048                    | calico-210619                | jenkins | v1.28.0 | 08 Jan 23 21:11 UTC |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                              |         |         |                     |                     |
	|         | --cni=calico --driver=docker                      |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	| ssh     | -p kindnet-210619 pgrep -a                        | kindnet-210619               | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	|         | kubelet                                           |                              |         |         |                     |                     |
	| delete  | -p kindnet-210619                                 | kindnet-210619               | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	| start   | -p enable-default-cni-210619                      | enable-default-cni-210619    | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	|         | --memory=2048                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                              |         |         |                     |                     |
	|         | --enable-default-cni=true                         |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	| ssh     | -p cilium-210619 pgrep -a                         | cilium-210619                | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	|         | kubelet                                           |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-210619                      | enable-default-cni-210619    | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	|         | pgrep -a kubelet                                  |                              |         |         |                     |                     |
	| delete  | -p cilium-210619                                  | cilium-210619                | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	| start   | -p bridge-210619 --memory=2048                    | bridge-210619                | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:13 UTC |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                              |         |         |                     |                     |
	|         | --cni=bridge --driver=docker                      |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	| ssh     | -p bridge-210619 pgrep -a                         | bridge-210619                | jenkins | v1.28.0 | 08 Jan 23 21:13 UTC | 08 Jan 23 21:13 UTC |
	|         | kubelet                                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-210902                      | kubernetes-upgrade-210902    | jenkins | v1.28.0 | 08 Jan 23 21:18 UTC | 08 Jan 23 21:18 UTC |
	| start   | -p old-k8s-version-211828                         | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:18 UTC |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --kvm-network=default                             |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                              |         |         |                     |                     |
	|         | --keep-context=false                              |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-210619                      | enable-default-cni-210619    | jenkins | v1.28.0 | 08 Jan 23 21:18 UTC | 08 Jan 23 21:18 UTC |
	| start   | -p no-preload-211859                              | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:18 UTC |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| delete  | -p bridge-210619                                  | bridge-210619                | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:19 UTC |
	| delete  | -p calico-210619                                  | calico-210619                | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:19 UTC |
	| start   | -p embed-certs-211950                             | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:20 UTC |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-211952 | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:19 UTC |
	|         | disable-driver-mounts-211952                      |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC |                     |
	|         | default-k8s-diff-port-211952                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-211950       | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:20 UTC | 08 Jan 23 21:20 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p embed-certs-211950                             | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:20 UTC | 08 Jan 23 21:21 UTC |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-211950            | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:21 UTC | 08 Jan 23 21:21 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-211950                             | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:21 UTC |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 21:21:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:21:05.802454  252113 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:21:05.802654  252113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:21:05.802662  252113 out.go:309] Setting ErrFile to fd 2...
	I0108 21:21:05.802669  252113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:21:05.802789  252113 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:21:05.803305  252113 out.go:303] Setting JSON to false
	I0108 21:21:05.804864  252113 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3815,"bootTime":1673209051,"procs":557,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:21:05.804927  252113 start.go:135] virtualization: kvm guest
	I0108 21:21:05.807547  252113 out.go:177] * [embed-certs-211950] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:21:05.809328  252113 notify.go:220] Checking for updates...
	I0108 21:21:05.809354  252113 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:21:05.811105  252113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:21:05.812689  252113 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:21:05.814326  252113 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:21:05.815772  252113 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:21:05.817523  252113 config.go:180] Loaded profile config "embed-certs-211950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:21:05.817884  252113 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:21:05.850227  252113 docker.go:137] docker version: linux-20.10.22
	I0108 21:21:05.850311  252113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:21:05.950357  252113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2023-01-08 21:21:05.870996193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:21:05.950457  252113 docker.go:254] overlay module found
	I0108 21:21:05.952625  252113 out.go:177] * Using the docker driver based on existing profile
	I0108 21:21:05.953952  252113 start.go:294] selected driver: docker
	I0108 21:21:05.953965  252113 start.go:838] validating driver "docker" against &{Name:embed-certs-211950 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-211950 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:21:05.954060  252113 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:21:05.954880  252113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:21:06.055295  252113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2023-01-08 21:21:05.976172276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:21:06.055595  252113 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:21:06.055620  252113 cni.go:95] Creating CNI manager for ""
	I0108 21:21:06.055628  252113 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:21:06.055645  252113 start_flags.go:317] config:
	{Name:embed-certs-211950 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-211950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:21:06.057844  252113 out.go:177] * Starting control plane node embed-certs-211950 in cluster embed-certs-211950
	I0108 21:21:06.059242  252113 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:21:06.060605  252113 out.go:177] * Pulling base image ...
	I0108 21:21:06.061894  252113 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:21:06.061922  252113 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:21:06.061940  252113 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0108 21:21:06.061952  252113 cache.go:57] Caching tarball of preloaded images
	I0108 21:21:06.062182  252113 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:21:06.062204  252113 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0108 21:21:06.062345  252113 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/config.json ...
	I0108 21:21:06.088100  252113 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:21:06.088123  252113 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:21:06.088154  252113 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:21:06.088191  252113 start.go:364] acquiring machines lock for embed-certs-211950: {Name:mk0bdd56e7ab57c1368c3e82ee515d1652a3526b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:21:06.088291  252113 start.go:368] acquired machines lock for "embed-certs-211950" in 77.123µs
	I0108 21:21:06.088316  252113 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:21:06.088321  252113 fix.go:55] fixHost starting: 
	I0108 21:21:06.088519  252113 cli_runner.go:164] Run: docker container inspect embed-certs-211950 --format={{.State.Status}}
	I0108 21:21:06.113908  252113 fix.go:103] recreateIfNeeded on embed-certs-211950: state=Stopped err=<nil>
	W0108 21:21:06.113938  252113 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:21:06.116212  252113 out.go:177] * Restarting existing docker container for "embed-certs-211950" ...
	I0108 21:21:04.447959  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:06.449102  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:05.102742  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:07.602250  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:06.312183  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:08.810045  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:06.117745  252113 cli_runner.go:164] Run: docker start embed-certs-211950
	I0108 21:21:06.475305  252113 cli_runner.go:164] Run: docker container inspect embed-certs-211950 --format={{.State.Status}}
	I0108 21:21:06.503725  252113 kic.go:415] container "embed-certs-211950" state is running.
	I0108 21:21:06.504129  252113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-211950
	I0108 21:21:06.530036  252113 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/config.json ...
	I0108 21:21:06.530275  252113 machine.go:88] provisioning docker machine ...
	I0108 21:21:06.530298  252113 ubuntu.go:169] provisioning hostname "embed-certs-211950"
	I0108 21:21:06.530340  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:06.559072  252113 main.go:134] libmachine: Using SSH client type: native
	I0108 21:21:06.559258  252113 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33032 <nil> <nil>}
	I0108 21:21:06.559273  252113 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-211950 && echo "embed-certs-211950" | sudo tee /etc/hostname
	I0108 21:21:06.559914  252113 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56878->127.0.0.1:33032: read: connection reset by peer
	I0108 21:21:09.684380  252113 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-211950
	
	I0108 21:21:09.684467  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:09.708681  252113 main.go:134] libmachine: Using SSH client type: native
	I0108 21:21:09.708844  252113 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33032 <nil> <nil>}
	I0108 21:21:09.708871  252113 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-211950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-211950/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-211950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:21:09.827124  252113 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:21:09.827161  252113 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:21:09.827192  252113 ubuntu.go:177] setting up certificates
	I0108 21:21:09.827204  252113 provision.go:83] configureAuth start
	I0108 21:21:09.827263  252113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-211950
	I0108 21:21:09.852817  252113 provision.go:138] copyHostCerts
	I0108 21:21:09.852880  252113 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:21:09.852893  252113 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:21:09.852963  252113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:21:09.853060  252113 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:21:09.853069  252113 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:21:09.853093  252113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:21:09.853148  252113 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:21:09.853158  252113 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:21:09.853182  252113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:21:09.853235  252113 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.embed-certs-211950 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-211950]
	I0108 21:21:09.920653  252113 provision.go:172] copyRemoteCerts
	I0108 21:21:09.920714  252113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:21:09.920750  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:09.947707  252113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33032 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/embed-certs-211950/id_rsa Username:docker}
	I0108 21:21:10.030903  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:21:10.048184  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 21:21:10.065587  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:21:10.083096  252113 provision.go:86] duration metric: configureAuth took 255.875528ms
	I0108 21:21:10.083135  252113 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:21:10.083333  252113 config.go:180] Loaded profile config "embed-certs-211950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:21:10.083347  252113 machine.go:91] provisioned docker machine in 3.553058016s
	I0108 21:21:10.083354  252113 start.go:300] post-start starting for "embed-certs-211950" (driver="docker")
	I0108 21:21:10.083362  252113 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:21:10.083415  252113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:21:10.083452  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:10.109702  252113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33032 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/embed-certs-211950/id_rsa Username:docker}
	I0108 21:21:10.195016  252113 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:21:10.197818  252113 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:21:10.197840  252113 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:21:10.197851  252113 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:21:10.197857  252113 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:21:10.197865  252113 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:21:10.197912  252113 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:21:10.197977  252113 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:21:10.198052  252113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:21:10.204746  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:21:10.222465  252113 start.go:303] post-start completed in 139.096583ms
	I0108 21:21:10.222528  252113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:21:10.222583  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:10.248489  252113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33032 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/embed-certs-211950/id_rsa Username:docker}
	I0108 21:21:10.332052  252113 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:21:10.335995  252113 fix.go:57] fixHost completed within 4.247669326s
	I0108 21:21:10.336018  252113 start.go:83] releasing machines lock for "embed-certs-211950", held for 4.247709743s
	I0108 21:21:10.336091  252113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-211950
	I0108 21:21:10.362577  252113 ssh_runner.go:195] Run: cat /version.json
	I0108 21:21:10.362643  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:10.362655  252113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:21:10.362722  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:10.389523  252113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33032 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/embed-certs-211950/id_rsa Username:docker}
	I0108 21:21:10.390135  252113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33032 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/embed-certs-211950/id_rsa Username:docker}
	I0108 21:21:10.474910  252113 ssh_runner.go:195] Run: systemctl --version
	I0108 21:21:10.503672  252113 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:21:10.515405  252113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:21:10.525294  252113 docker.go:189] disabling docker service ...
	I0108 21:21:10.525338  252113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:21:10.535021  252113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:21:10.543823  252113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:21:10.626580  252113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:21:10.703580  252113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:21:10.712815  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:21:10.725307  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:21:10.733204  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:21:10.742003  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:21:10.749989  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 21:21:10.757996  252113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:21:10.764350  252113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:21:10.770752  252113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:21:08.948177  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:10.948447  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:09.602390  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:12.102789  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:10.810085  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:13.309701  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:10.843690  252113 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:21:10.910489  252113 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:21:10.910563  252113 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:21:10.914322  252113 start.go:472] Will wait 60s for crictl version
	I0108 21:21:10.914382  252113 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:21:10.939459  252113 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:21:10Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 21:21:13.448462  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:15.448772  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:17.948651  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:14.602345  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:17.102934  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:15.810341  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:17.811099  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:21.986836  252113 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:21:22.009302  252113 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:21:22.009370  252113 ssh_runner.go:195] Run: containerd --version
	I0108 21:21:22.032318  252113 ssh_runner.go:195] Run: containerd --version
	I0108 21:21:22.057723  252113 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:21:20.448682  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:22.948378  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:19.602327  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:21.602862  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:20.309592  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:22.309823  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:22.059129  252113 cli_runner.go:164] Run: docker network inspect embed-certs-211950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:21:22.082721  252113 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0108 21:21:22.086120  252113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:21:22.095607  252113 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:21:22.095676  252113 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:21:22.119292  252113 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:21:22.119314  252113 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:21:22.119353  252113 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:21:22.144549  252113 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:21:22.144574  252113 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:21:22.144617  252113 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:21:22.169507  252113 cni.go:95] Creating CNI manager for ""
	I0108 21:21:22.169531  252113 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:21:22.169546  252113 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:21:22.169563  252113 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-211950 NodeName:embed-certs-211950 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:21:22.169743  252113 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-211950"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:21:22.169858  252113 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-211950 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:embed-certs-211950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:21:22.169918  252113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:21:22.177488  252113 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:21:22.177552  252113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:21:22.184516  252113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (511 bytes)
	I0108 21:21:22.197565  252113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:21:22.210079  252113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2047 bytes)
	I0108 21:21:22.222327  252113 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:21:22.225285  252113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:21:22.234190  252113 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950 for IP: 192.168.94.2
	I0108 21:21:22.234285  252113 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:21:22.234322  252113 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:21:22.234389  252113 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/client.key
	I0108 21:21:22.234443  252113 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/apiserver.key.ad8e880a
	I0108 21:21:22.234517  252113 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/proxy-client.key
	I0108 21:21:22.234619  252113 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:21:22.234647  252113 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:21:22.234656  252113 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:21:22.234690  252113 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:21:22.234715  252113 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:21:22.234739  252113 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:21:22.234776  252113 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:21:22.235406  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:21:22.252804  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 21:21:22.269489  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:21:22.286176  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 21:21:22.302881  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:21:22.319924  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:21:22.336527  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:21:22.353096  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:21:22.369684  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:21:22.386382  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:21:22.403589  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:21:22.422540  252113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:21:22.434954  252113 ssh_runner.go:195] Run: openssl version
	I0108 21:21:22.439875  252113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:21:22.447293  252113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:21:22.450515  252113 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:21:22.450562  252113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:21:22.455232  252113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:21:22.461900  252113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:21:22.469022  252113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:21:22.471993  252113 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:21:22.472043  252113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:21:22.476628  252113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:21:22.483089  252113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:21:22.490167  252113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:21:22.493388  252113 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:21:22.493425  252113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:21:22.498191  252113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:21:22.505075  252113 kubeadm.go:396] StartCluster: {Name:embed-certs-211950 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-211950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:21:22.505169  252113 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:21:22.505219  252113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:21:22.530247  252113 cri.go:87] found id: "89a8de6f521f8243c799cb716667457963c6e97b5ba6b48214976b5969e46eb3"
	I0108 21:21:22.530269  252113 cri.go:87] found id: "d147c154d2b1bba1e7914547754b114d509b8f036c6ab17cc46cd16f2bb67804"
	I0108 21:21:22.530276  252113 cri.go:87] found id: "8c4edc81cee83db5f851592ab6e35f35d1a3dcbc676e2621c025ccd2e6d361f1"
	I0108 21:21:22.530282  252113 cri.go:87] found id: "deadf4ad2cb0b9ea2eccba64bfd495d97f73bb63315a5039a69dfa9bd91b9557"
	I0108 21:21:22.530288  252113 cri.go:87] found id: "96646d39dfe73748d7e64070179768bcb8d8dfeb8292891cf42b4e0f8e39ac8f"
	I0108 21:21:22.530294  252113 cri.go:87] found id: "0f13fbba981df0d0b39c780e1ad6e510287e450ece0fdc730f960c6dba03815b"
	I0108 21:21:22.530300  252113 cri.go:87] found id: "661098908290ee1fa99389ecb59d2a0cd00cb6464e959f754d124c4173502b64"
	I0108 21:21:22.530305  252113 cri.go:87] found id: "a7959ec8c708edd93b01c594af64ca292a111e1d721e2c8446d6efc28bba653a"
	I0108 21:21:22.530311  252113 cri.go:87] found id: ""
	I0108 21:21:22.530349  252113 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:21:22.542527  252113 cri.go:114] JSON = null
	W0108 21:21:22.542587  252113 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0108 21:21:22.542631  252113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:21:22.550243  252113 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:21:22.550264  252113 kubeadm.go:627] restartCluster start
	I0108 21:21:22.550299  252113 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:21:22.557319  252113 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:22.558314  252113 kubeconfig.go:135] verify returned: extract IP: "embed-certs-211950" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:21:22.558783  252113 kubeconfig.go:146] "embed-certs-211950" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:21:22.559413  252113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:21:22.560901  252113 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:21:22.567580  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:22.567625  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:22.575328  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:22.775525  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:22.775607  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:22.784331  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:22.975569  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:22.975661  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:22.984395  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:23.175545  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:23.175618  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:23.184269  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:23.375514  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:23.375606  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:23.384151  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:23.576476  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:23.576564  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:23.585154  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:23.776477  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:23.776559  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:23.785115  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:23.976398  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:23.976477  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:23.985629  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:24.175955  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:24.176027  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:24.185012  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:24.376357  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:24.376419  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:24.385370  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:24.575561  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:24.575652  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:24.584295  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:24.775523  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:24.775587  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:24.783989  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:24.976277  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:24.976357  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:24.984953  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.176244  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:25.176331  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:25.184911  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.376222  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:25.376301  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:25.385465  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.575785  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:25.575879  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:25.584484  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.584506  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:25.584548  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:25.592781  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.592805  252113 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0108 21:21:25.592811  252113 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:21:25.592822  252113 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:21:25.592860  252113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:21:25.618121  252113 cri.go:87] found id: "89a8de6f521f8243c799cb716667457963c6e97b5ba6b48214976b5969e46eb3"
	I0108 21:21:25.618143  252113 cri.go:87] found id: "d147c154d2b1bba1e7914547754b114d509b8f036c6ab17cc46cd16f2bb67804"
	I0108 21:21:25.618150  252113 cri.go:87] found id: "8c4edc81cee83db5f851592ab6e35f35d1a3dcbc676e2621c025ccd2e6d361f1"
	I0108 21:21:25.618156  252113 cri.go:87] found id: "deadf4ad2cb0b9ea2eccba64bfd495d97f73bb63315a5039a69dfa9bd91b9557"
	I0108 21:21:25.618162  252113 cri.go:87] found id: "96646d39dfe73748d7e64070179768bcb8d8dfeb8292891cf42b4e0f8e39ac8f"
	I0108 21:21:25.618168  252113 cri.go:87] found id: "0f13fbba981df0d0b39c780e1ad6e510287e450ece0fdc730f960c6dba03815b"
	I0108 21:21:25.618174  252113 cri.go:87] found id: "661098908290ee1fa99389ecb59d2a0cd00cb6464e959f754d124c4173502b64"
	I0108 21:21:25.618180  252113 cri.go:87] found id: "a7959ec8c708edd93b01c594af64ca292a111e1d721e2c8446d6efc28bba653a"
	I0108 21:21:25.618186  252113 cri.go:87] found id: ""
	I0108 21:21:25.618194  252113 cri.go:232] Stopping containers: [89a8de6f521f8243c799cb716667457963c6e97b5ba6b48214976b5969e46eb3 d147c154d2b1bba1e7914547754b114d509b8f036c6ab17cc46cd16f2bb67804 8c4edc81cee83db5f851592ab6e35f35d1a3dcbc676e2621c025ccd2e6d361f1 deadf4ad2cb0b9ea2eccba64bfd495d97f73bb63315a5039a69dfa9bd91b9557 96646d39dfe73748d7e64070179768bcb8d8dfeb8292891cf42b4e0f8e39ac8f 0f13fbba981df0d0b39c780e1ad6e510287e450ece0fdc730f960c6dba03815b 661098908290ee1fa99389ecb59d2a0cd00cb6464e959f754d124c4173502b64 a7959ec8c708edd93b01c594af64ca292a111e1d721e2c8446d6efc28bba653a]
	I0108 21:21:25.618232  252113 ssh_runner.go:195] Run: which crictl
	I0108 21:21:25.621048  252113 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 89a8de6f521f8243c799cb716667457963c6e97b5ba6b48214976b5969e46eb3 d147c154d2b1bba1e7914547754b114d509b8f036c6ab17cc46cd16f2bb67804 8c4edc81cee83db5f851592ab6e35f35d1a3dcbc676e2621c025ccd2e6d361f1 deadf4ad2cb0b9ea2eccba64bfd495d97f73bb63315a5039a69dfa9bd91b9557 96646d39dfe73748d7e64070179768bcb8d8dfeb8292891cf42b4e0f8e39ac8f 0f13fbba981df0d0b39c780e1ad6e510287e450ece0fdc730f960c6dba03815b 661098908290ee1fa99389ecb59d2a0cd00cb6464e959f754d124c4173502b64 a7959ec8c708edd93b01c594af64ca292a111e1d721e2c8446d6efc28bba653a
	I0108 21:21:25.647817  252113 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:21:25.657541  252113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:21:25.664561  252113 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan  8 21:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan  8 21:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Jan  8 21:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan  8 21:20 /etc/kubernetes/scheduler.conf
	
	I0108 21:21:25.664619  252113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 21:21:25.671011  252113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 21:21:25.677375  252113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 21:21:25.683797  252113 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.683846  252113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 21:21:25.689922  252113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 21:21:25.696159  252113 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.696204  252113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 21:21:25.702527  252113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:21:25.708916  252113 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:21:25.708938  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:21:25.752274  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:21:25.448237  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:27.948001  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:24.102717  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:26.602573  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:24.809637  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:26.810185  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:28.810565  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:26.771186  252113 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.018881983s)
	I0108 21:21:26.771221  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:21:26.910605  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:21:26.962648  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:21:27.049416  252113 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:21:27.049533  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:21:27.614351  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:21:28.113890  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:21:28.125589  252113 api_server.go:71] duration metric: took 1.076175741s to wait for apiserver process to appear ...
	I0108 21:21:28.125678  252113 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:21:28.125706  252113 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:21:28.126079  252113 api_server.go:268] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0108 21:21:28.626473  252113 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:21:29.948452  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:31.948574  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:31.619403  252113 api_server.go:278] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0108 21:21:31.619437  252113 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0108 21:21:31.626775  252113 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:21:31.712269  252113 api_server.go:278] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:21:31.712321  252113 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:21:32.126802  252113 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:21:32.131550  252113 api_server.go:278] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:21:32.131592  252113 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:21:32.627202  252113 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:21:32.632820  252113 api_server.go:278] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:21:32.632854  252113 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:21:33.126355  252113 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:21:33.132259  252113 api_server.go:278] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0108 21:21:33.140648  252113 api_server.go:140] control plane version: v1.25.3
	I0108 21:21:33.140683  252113 api_server.go:130] duration metric: took 5.014986172s to wait for apiserver health ...
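	For reference, the verbose healthz breakdown being polled above can be requested directly from inside the node. A minimal sketch, assuming the profile behind process 252113 is embed-certs-211950 (as the pod names later in this process's output indicate) and reusing the kubeconfig and binary paths that appear elsewhere in this log:
	  out/minikube-linux-amd64 ssh -p embed-certs-211950 -- sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw '/healthz?verbose'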
	I0108 21:21:33.140697  252113 cni.go:95] Creating CNI manager for ""
	I0108 21:21:33.140707  252113 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:21:33.143250  252113 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:21:29.102196  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:31.102881  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:31.310002  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:33.809947  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:33.145039  252113 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:21:33.149495  252113 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:21:33.149517  252113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:21:33.165823  252113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:21:34.423055  252113 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.257190006s)
	I0108 21:21:34.423131  252113 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:21:34.431806  252113 system_pods.go:59] 9 kube-system pods found
	I0108 21:21:34.431843  252113 system_pods.go:61] "coredns-565d847f94-phg9v" [2a976fdd-21b3-4dee-a33c-ccd2c57d8be9] Running
	I0108 21:21:34.431856  252113 system_pods.go:61] "etcd-embed-certs-211950" [4971d596-11e2-4364-a509-52a06bf77e09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:21:34.431864  252113 system_pods.go:61] "kindnet-26wwc" [02f0fed5-e625-4740-aa5e-d77817ca124b] Running
	I0108 21:21:34.431884  252113 system_pods.go:61] "kube-apiserver-embed-certs-211950" [ba0d2dbe-2dbb-4a40-b2cc-da82f163d7f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 21:21:34.431900  252113 system_pods.go:61] "kube-controller-manager-embed-certs-211950" [0877b5ea-137d-4d80-a5d2-fd95544ba3bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:21:34.431916  252113 system_pods.go:61] "kube-proxy-ggxgh" [1bd15143-26d2-4a26-a52e-362676c5397b] Running
	I0108 21:21:34.431928  252113 system_pods.go:61] "kube-scheduler-embed-certs-211950" [41b9dede-c0fb-4644-8fa3-51d3eccd950b] Running
	I0108 21:21:34.431942  252113 system_pods.go:61] "metrics-server-5c8fd5cf8-szzjr" [488ef49e-82e4-443b-8f03-3726c44719af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:21:34.431954  252113 system_pods.go:61] "storage-provisioner" [024b335d-c262-457c-8773-924e20b66407] Running
	I0108 21:21:34.431962  252113 system_pods.go:74] duration metric: took 8.820242ms to wait for pod list to return data ...
	I0108 21:21:34.431976  252113 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:21:34.436028  252113 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:21:34.436084  252113 node_conditions.go:123] node cpu capacity is 8
	I0108 21:21:34.436102  252113 node_conditions.go:105] duration metric: took 4.121302ms to run NodePressure ...
	I0108 21:21:34.436123  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:21:34.580079  252113 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 21:21:34.583873  252113 kubeadm.go:778] kubelet initialised
	I0108 21:21:34.583892  252113 kubeadm.go:779] duration metric: took 3.792429ms waiting for restarted kubelet to initialise ...
	I0108 21:21:34.583900  252113 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:21:34.589069  252113 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-phg9v" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:34.593053  252113 pod_ready.go:92] pod "coredns-565d847f94-phg9v" in "kube-system" namespace has status "Ready":"True"
	I0108 21:21:34.593070  252113 pod_ready.go:81] duration metric: took 3.977273ms waiting for pod "coredns-565d847f94-phg9v" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:34.593079  252113 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:34.448121  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:36.947638  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:33.602189  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:35.602598  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:38.102223  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:35.811248  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:38.309824  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:36.603328  252113 pod_ready.go:102] pod "etcd-embed-certs-211950" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:39.102636  252113 pod_ready.go:102] pod "etcd-embed-certs-211950" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:39.448188  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:41.448816  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:40.602552  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:43.103011  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:40.310117  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:42.310275  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:41.102721  252113 pod_ready.go:92] pod "etcd-embed-certs-211950" in "kube-system" namespace has status "Ready":"True"
	I0108 21:21:41.102749  252113 pod_ready.go:81] duration metric: took 6.509663521s waiting for pod "etcd-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:41.102765  252113 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:41.107086  252113 pod_ready.go:92] pod "kube-apiserver-embed-certs-211950" in "kube-system" namespace has status "Ready":"True"
	I0108 21:21:41.107102  252113 pod_ready.go:81] duration metric: took 4.330679ms waiting for pod "kube-apiserver-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:41.107110  252113 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:43.117124  252113 pod_ready.go:102] pod "kube-controller-manager-embed-certs-211950" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:45.616162  252113 pod_ready.go:102] pod "kube-controller-manager-embed-certs-211950" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:43.947639  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:45.948111  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:47.948466  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:45.603311  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:48.102009  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:44.809516  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:46.809649  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:48.810315  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:46.116423  252113 pod_ready.go:92] pod "kube-controller-manager-embed-certs-211950" in "kube-system" namespace has status "Ready":"True"
	I0108 21:21:46.116450  252113 pod_ready.go:81] duration metric: took 5.00933349s waiting for pod "kube-controller-manager-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:46.116461  252113 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ggxgh" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:46.120605  252113 pod_ready.go:92] pod "kube-proxy-ggxgh" in "kube-system" namespace has status "Ready":"True"
	I0108 21:21:46.120624  252113 pod_ready.go:81] duration metric: took 4.157414ms waiting for pod "kube-proxy-ggxgh" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:46.120633  252113 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:47.630047  252113 pod_ready.go:92] pod "kube-scheduler-embed-certs-211950" in "kube-system" namespace has status "Ready":"True"
	I0108 21:21:47.630074  252113 pod_ready.go:81] duration metric: took 1.509435424s waiting for pod "kube-scheduler-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:47.630084  252113 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:49.639550  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:50.447460  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:52.448611  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:50.102594  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:52.601892  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:51.309943  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:53.809699  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:52.139845  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:54.639170  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:54.947665  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:56.947700  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:54.602771  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:57.101897  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:56.310210  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:58.809748  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:57.141151  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:59.639435  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:58.949756  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:01.448451  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:59.101923  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:01.101962  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:03.102909  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:00.810593  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:03.310194  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:02.139593  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:04.639219  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:03.947604  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:05.948211  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:05.602550  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:07.602641  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:05.809750  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:07.809939  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:06.639683  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:09.139384  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:08.447451  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:10.448497  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:12.948218  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:10.102194  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:12.102500  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:10.309596  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:12.309705  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:11.140038  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:13.639053  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:15.639818  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:15.449977  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:17.947675  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:14.102962  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:16.602092  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:14.310432  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:16.810413  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:18.139157  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:20.139713  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:19.947707  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:21.948479  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:19.102762  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:21.602905  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:19.309811  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:21.309972  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:23.810232  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:22.140404  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:24.142445  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:24.447621  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:26.448004  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:24.102645  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:26.602186  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:25.810410  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:28.310174  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:26.639220  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:28.640090  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:28.947732  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:31.448269  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:28.602252  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:31.102800  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:30.310481  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:32.311174  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:31.139684  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:33.140111  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:35.639008  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:33.948439  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:36.448349  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:33.602137  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:36.101829  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:38.102615  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:34.810711  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:37.310466  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:37.639384  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:39.639644  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:38.948577  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:41.447813  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:40.102951  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:42.104260  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:39.810530  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:42.309510  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:42.141404  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:44.639565  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:43.448406  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:45.947675  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:47.948164  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:44.602836  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:47.102043  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:44.310568  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:46.809625  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:48.809973  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:46.640262  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:49.139383  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:50.448245  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:52.948450  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:49.102979  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:51.601953  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:50.810329  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:53.310062  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:51.639284  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:54.139362  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:55.447735  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:57.448306  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:53.602267  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:55.602823  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:58.101977  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:55.810600  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:58.310283  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:56.139671  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:58.639895  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:59.448595  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:01.448628  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:00.102056  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:23:02.602631  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:23:00.310562  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:02.810458  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:01.139847  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:03.140497  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:05.639659  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:05.102809  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:23:06.104150  234278 node_ready.go:38] duration metric: took 4m0.01108953s waiting for node "old-k8s-version-211828" to be "Ready" ...
	I0108 21:23:06.106363  234278 out.go:177] 
	W0108 21:23:06.107917  234278 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:23:06.107934  234278 out.go:239] * 
	W0108 21:23:06.108813  234278 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:23:06.110698  234278 out.go:177] 
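	The exit above is the node_ready poller giving up after 6m0s because old-k8s-version-211828 never reported Ready. The equivalent check with plain kubectl would look roughly like this (a sketch, assuming minikube has written a kubectl context named after the profile, which is its default behaviour):
	  kubectl --context old-k8s-version-211828 wait --for=condition=Ready node/old-k8s-version-211828 --timeout=6m
	  kubectl --context old-k8s-version-211828 get node old-k8s-version-211828 -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'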
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	a3a1060e13467       d6e3e26021b60       About a minute ago   Running             kindnet-cni               1                   156951a7e6ad9
	574f15edf8331       d6e3e26021b60       3 minutes ago        Exited              kindnet-cni               0                   156951a7e6ad9
	4fdaee2b10f29       c21b0c7400f98       4 minutes ago        Running             kube-proxy                0                   700fdf969a65f
	a9e20d8377a66       b2756210eeabf       4 minutes ago        Running             etcd                      0                   3177f12cbcc92
	3baeebbc6da60       b305571ca60a5       4 minutes ago        Running             kube-apiserver            0                   0c6dba6ffda90
	dc587e05c9875       06a629a7e51cd       4 minutes ago        Running             kube-controller-manager   0                   6963fcc252763
	18030e6256a0f       301ddc62b80b1       4 minutes ago        Running             kube-scheduler            0                   40f53ffcd3927
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sun 2023-01-08 21:18:35 UTC, end at Sun 2023-01-08 21:23:07 UTC. --
	Jan 08 21:19:06 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:19:06.017362062Z" level=info msg="CreateContainer within sandbox \"700fdf969a65fcbdc6f930b454bd581db776120d083de6ae659287dab0c67fb6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Jan 08 21:19:06 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:19:06.034744078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-9z2n8,Uid:ec80e506-5c07-426a-96b5-39a19c3616de,Namespace:kube-system,Attempt:0,} returns sandbox id \"156951a7e6ad93a05e095bff14d2097ddbf5a7bcfa8469c08b265cf49b68920b\""
	Jan 08 21:19:06 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:19:06.035304776Z" level=info msg="CreateContainer within sandbox \"700fdf969a65fcbdc6f930b454bd581db776120d083de6ae659287dab0c67fb6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4fdaee2b10f2982887b9869fcd0c1dc5ebe85380068a2c7ca6e308bbd418bf25\""
	Jan 08 21:19:06 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:19:06.035938964Z" level=info msg="StartContainer for \"4fdaee2b10f2982887b9869fcd0c1dc5ebe85380068a2c7ca6e308bbd418bf25\""
	Jan 08 21:19:06 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:19:06.036835187Z" level=info msg="PullImage \"kindest/kindnetd:v20221004-44d545d1\""
	Jan 08 21:19:06 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:19:06.039080792Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jan 08 21:19:06 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:19:06.154005100Z" level=info msg="StartContainer for \"4fdaee2b10f2982887b9869fcd0c1dc5ebe85380068a2c7ca6e308bbd418bf25\" returns successfully"
	Jan 08 21:19:06 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:19:06.846546160Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jan 08 21:19:08 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:19:08.990692368Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/kindest/kindnetd:v20221004-44d545d1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jan 08 21:19:08 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:19:08.993044098Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jan 08 21:19:08 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:19:08.995173002Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/kindest/kindnetd:v20221004-44d545d1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jan 08 21:19:08 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:19:08.997053950Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jan 08 21:19:08 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:19:08.997446958Z" level=info msg="PullImage \"kindest/kindnetd:v20221004-44d545d1\" returns image reference \"sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f\""
	Jan 08 21:19:08 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:19:08.999771407Z" level=info msg="CreateContainer within sandbox \"156951a7e6ad93a05e095bff14d2097ddbf5a7bcfa8469c08b265cf49b68920b\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Jan 08 21:19:09 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:19:09.015702320Z" level=info msg="CreateContainer within sandbox \"156951a7e6ad93a05e095bff14d2097ddbf5a7bcfa8469c08b265cf49b68920b\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"574f15edf833175e912660c2f5c10a57435ef520281471547e15dedce5a8781a\""
	Jan 08 21:19:09 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:19:09.016353644Z" level=info msg="StartContainer for \"574f15edf833175e912660c2f5c10a57435ef520281471547e15dedce5a8781a\""
	Jan 08 21:19:09 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:19:09.137636684Z" level=info msg="StartContainer for \"574f15edf833175e912660c2f5c10a57435ef520281471547e15dedce5a8781a\" returns successfully"
	Jan 08 21:21:49 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:21:49.762052672Z" level=info msg="shim disconnected" id=574f15edf833175e912660c2f5c10a57435ef520281471547e15dedce5a8781a
	Jan 08 21:21:49 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:21:49.762113671Z" level=warning msg="cleaning up after shim disconnected" id=574f15edf833175e912660c2f5c10a57435ef520281471547e15dedce5a8781a namespace=k8s.io
	Jan 08 21:21:49 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:21:49.762134360Z" level=info msg="cleaning up dead shim"
	Jan 08 21:21:49 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:21:49.770880942Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:21:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2494 runtime=io.containerd.runc.v2\n"
	Jan 08 21:21:50 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:21:50.558910055Z" level=info msg="CreateContainer within sandbox \"156951a7e6ad93a05e095bff14d2097ddbf5a7bcfa8469c08b265cf49b68920b\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Jan 08 21:21:50 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:21:50.574094071Z" level=info msg="CreateContainer within sandbox \"156951a7e6ad93a05e095bff14d2097ddbf5a7bcfa8469c08b265cf49b68920b\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"a3a1060e1346768567a9ea4fb7fb7b0012cc8417fe5011dd546dd1255ed49b4d\""
	Jan 08 21:21:50 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:21:50.574621387Z" level=info msg="StartContainer for \"a3a1060e1346768567a9ea4fb7fb7b0012cc8417fe5011dd546dd1255ed49b4d\""
	Jan 08 21:21:50 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:21:50.813871823Z" level=info msg="StartContainer for \"a3a1060e1346768567a9ea4fb7fb7b0012cc8417fe5011dd546dd1255ed49b4d\" returns successfully"
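	The repeated "failed to decode hosts.toml" / "invalid `host` tree" errors in this containerd log concern the registry hosts configuration, not the pull itself (the kindest/kindnetd pull succeeds a few lines later). For comparison, a minimal well-formed hosts.toml looks like the following; the path and registry entries here are illustrative assumptions, not taken from this run:
	  # /etc/containerd/certs.d/docker.io/hosts.toml (illustrative location)
	  server = "https://registry-1.docker.io"
	
	  [host."https://registry-1.docker.io"]
	    capabilities = ["pull", "resolve"]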
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-211828
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-211828
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=old-k8s-version-211828
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T21_18_51_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 21:18:45 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 21:22:16 +0000   Sun, 08 Jan 2023 21:18:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 21:22:16 +0000   Sun, 08 Jan 2023 21:18:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 21:22:16 +0000   Sun, 08 Jan 2023 21:18:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 08 Jan 2023 21:22:16 +0000   Sun, 08 Jan 2023 21:18:42 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-211828
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304681132Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32871748Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304681132Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32871748Ki
	 pods:               110
	System Info:
	 Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	 System UUID:                a9413ae7-d165-4b76-a22b-73b89e3e2d6a
	 Boot ID:                    abb1671c-ddf5-4694-bdc8-1024e5cc0b18
	 Kernel Version:             5.15.0-1025-gcp
	 OS Image:                   Ubuntu 20.04.5 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.6.10
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-211828                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                kindnet-9z2n8                                      100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                kube-apiserver-old-k8s-version-211828              250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                kube-controller-manager-old-k8s-version-211828     200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                kube-proxy-jqh6r                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                kube-scheduler-old-k8s-version-211828              100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From                                Message
	  ----    ------                   ----                   ----                                -------
	  Normal  NodeHasSufficientMemory  4m26s (x8 over 4m26s)  kubelet, old-k8s-version-211828     Node old-k8s-version-211828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x8 over 4m26s)  kubelet, old-k8s-version-211828     Node old-k8s-version-211828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x7 over 4m26s)  kubelet, old-k8s-version-211828     Node old-k8s-version-211828 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m1s                   kube-proxy, old-k8s-version-211828  Starting kube-proxy.
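	The Ready=False condition above ("cni plugin not initialized") is the proximate cause of the timeout. Two commands that would confirm the CNI state from the host, in the same style as the ssh invocations used elsewhere in this run (a sketch; output not captured here):
	  out/minikube-linux-amd64 ssh -p old-k8s-version-211828 -- sudo ls /etc/cni/net.d
	  out/minikube-linux-amd64 ssh -p old-k8s-version-211828 -- sudo crictl info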
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +2.971851] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027844] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027909] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[Jan 8 21:19] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.006215] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023951] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.967852] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.035798] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023925] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.940341] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.027361] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.019905] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	
	* 
	* ==> etcd [a9e20d8377a666867d2e18a3ea12818eaa42542c47d99b0ba20c5f7b3c9a8f70] <==
	* 2023-01-08 21:18:42.222614 I | raft: ea7e25599daad906 became follower at term 0
	2023-01-08 21:18:42.222680 I | raft: newRaft ea7e25599daad906 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-01-08 21:18:42.222758 I | raft: ea7e25599daad906 became follower at term 1
	2023-01-08 21:18:42.230174 W | auth: simple token is not cryptographically signed
	2023-01-08 21:18:42.233023 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-01-08 21:18:42.234706 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-01-08 21:18:42.234820 I | embed: listening for metrics on http://192.168.76.2:2381
	2023-01-08 21:18:42.235027 I | etcdserver: ea7e25599daad906 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-01-08 21:18:42.235302 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-01-08 21:18:42.236050 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2023-01-08 21:18:42.823124 I | raft: ea7e25599daad906 is starting a new election at term 1
	2023-01-08 21:18:42.823163 I | raft: ea7e25599daad906 became candidate at term 2
	2023-01-08 21:18:42.823187 I | raft: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	2023-01-08 21:18:42.823199 I | raft: ea7e25599daad906 became leader at term 2
	2023-01-08 21:18:42.823207 I | raft: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2023-01-08 21:18:42.823565 I | etcdserver: setting up the initial cluster version to 3.3
	2023-01-08 21:18:42.823593 I | etcdserver: published {Name:old-k8s-version-211828 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2023-01-08 21:18:42.823608 I | embed: ready to serve client requests
	2023-01-08 21:18:42.823618 I | embed: ready to serve client requests
	2023-01-08 21:18:42.824159 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-01-08 21:18:42.824619 I | etcdserver/api: enabled capabilities for version 3.3
	2023-01-08 21:18:42.825884 I | embed: serving client requests on 192.168.76.2:2379
	2023-01-08 21:18:42.826236 I | embed: serving client requests on 127.0.0.1:2379
	2023-01-08 21:19:59.361725 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:215" took too long (117.600264ms) to execute
	2023-01-08 21:20:00.632412 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (349.431998ms) to execute
	
	* 
	* ==> kernel <==
	*  21:23:07 up  1:05,  0 users,  load average: 0.64, 1.25, 1.70
	Linux old-k8s-version-211828 5.15.0-1025-gcp #32~20.04.2-Ubuntu SMP Tue Nov 29 08:31:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [3baeebbc6da6011661ac440d440193720d7cb3ffc1d6f51175b239cc7994d8d4] <==
	* I0108 21:18:45.832401       1 naming_controller.go:288] Starting NamingConditionController
	I0108 21:18:45.832488       1 establishing_controller.go:73] Starting EstablishingController
	I0108 21:18:45.832179       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E0108 21:18:45.833021       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.76.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0108 21:18:45.931842       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:18:45.932088       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:18:45.932770       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0108 21:18:45.932802       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:18:46.831581       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0108 21:18:46.831614       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0108 21:18:46.831759       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 21:18:46.835235       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I0108 21:18:46.838488       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I0108 21:18:46.838509       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0108 21:18:47.618962       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:18:48.612611       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:18:48.892810       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0108 21:18:49.229017       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0108 21:18:49.229650       1 controller.go:606] quota admission added evaluator for: endpoints
	I0108 21:18:50.129578       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0108 21:18:50.537710       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0108 21:18:50.897501       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0108 21:19:05.581835       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0108 21:19:05.598947       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0108 21:19:05.761910       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [dc587e05c9875fe35b86a28d7d5b8fc7bedc7907ec9abcf12c1883d15804ed4d] <==
	* I0108 21:19:05.532786       1 shared_informer.go:204] Caches are synced for HPA 
	I0108 21:19:05.577838       1 shared_informer.go:204] Caches are synced for daemon sets 
	I0108 21:19:05.583510       1 shared_informer.go:204] Caches are synced for stateful set 
	I0108 21:19:05.589186       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"a0f32b10-75af-4660-85eb-9e2d60222d15", APIVersion:"apps/v1", ResourceVersion:"226", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-9z2n8
	I0108 21:19:05.591195       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"5562d924-d3c2-495e-8160-7930ac4bed98", APIVersion:"apps/v1", ResourceVersion:"214", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-jqh6r
	E0108 21:19:05.603944       1 daemon_controller.go:302] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"a0f32b10-75af-4660-85eb-9e2d60222d15", ResourceVersion:"226", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63808809531, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20221004-44d545d1\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerati
ons\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.mk\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001002e80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:
[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001002ea0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.Vsphere
VirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001002ec0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolume
Source)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001002ee0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)
(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.
Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20221004-44d545d1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001002f00)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001002f40)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resou
rce.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0011562d0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.Eph
emeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0007111e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0011652c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.Resou
rceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00013c870)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000711260)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	E0108 21:19:05.611711       1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"5562d924-d3c2-495e-8160-7930ac4bed98", ResourceVersion:"214", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63808809530, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001002da0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Names
pace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeS
ource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001a2a980), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001002dc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001002de0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.A
zureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001002e20)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMo
de)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001156140), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000710ed8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServic
eAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001165260), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy
{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00013c868)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000710f18)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0108 21:19:05.726906       1 shared_informer.go:204] Caches are synced for disruption 
	I0108 21:19:05.726931       1 disruption.go:341] Sending events to api server.
	I0108 21:19:05.734015       1 shared_informer.go:204] Caches are synced for resource quota 
	I0108 21:19:05.759951       1 shared_informer.go:204] Caches are synced for deployment 
	I0108 21:19:05.764185       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"2341f665-8f22-48e3-9b76-dbd488b1235d", APIVersion:"apps/v1", ResourceVersion:"320", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 1
	I0108 21:19:05.775680       1 shared_informer.go:204] Caches are synced for resource quota 
	I0108 21:19:05.783488       1 shared_informer.go:204] Caches are synced for ReplicaSet 
	I0108 21:19:05.787135       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"993e0e3b-0673-4494-853e-0ee4024d61de", APIVersion:"apps/v1", ResourceVersion:"336", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-lm49s
	I0108 21:19:05.788779       1 shared_informer.go:204] Caches are synced for expand 
	I0108 21:19:05.788903       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0108 21:19:05.788923       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0108 21:19:05.803010       1 shared_informer.go:204] Caches are synced for certificate 
	I0108 21:19:05.807809       1 shared_informer.go:204] Caches are synced for persistent volume 
	I0108 21:19:05.833561       1 shared_informer.go:204] Caches are synced for certificate 
	I0108 21:19:05.834019       1 shared_informer.go:204] Caches are synced for attach detach 
	I0108 21:19:05.835463       1 shared_informer.go:204] Caches are synced for PV protection 
	I0108 21:19:05.839437       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0108 21:19:05.850779       1 log.go:172] [INFO] signed certificate with serial number 477651019640136324065142830251145268032180874070
	
	* 
	* ==> kube-proxy [4fdaee2b10f2982887b9869fcd0c1dc5ebe85380068a2c7ca6e308bbd418bf25] <==
	* W0108 21:19:06.244257       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0108 21:19:06.253650       1 node.go:135] Successfully retrieved node IP: 192.168.76.2
	I0108 21:19:06.253708       1 server_others.go:149] Using iptables Proxier.
	I0108 21:19:06.254406       1 server.go:529] Version: v1.16.0
	I0108 21:19:06.255737       1 config.go:131] Starting endpoints config controller
	I0108 21:19:06.255772       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0108 21:19:06.255807       1 config.go:313] Starting service config controller
	I0108 21:19:06.255831       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0108 21:19:06.409897       1 shared_informer.go:204] Caches are synced for service config 
	I0108 21:19:06.409933       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [18030e6256a0f097fd3fd026a18690f5b7e901b5dacd851696eb59d51effb330] <==
	* I0108 21:18:45.921105       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0108 21:18:45.921798       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0108 21:18:46.015772       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:18:46.016363       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:18:46.017243       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:18:46.017333       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:18:46.017720       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:18:46.017731       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:18:46.017872       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:18:46.018105       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:18:46.019057       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:18:46.019300       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:18:46.020625       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:18:47.017213       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:18:47.018225       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:18:47.020195       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:18:47.020884       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:18:47.021712       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:18:47.022941       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:18:47.023944       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:18:47.024799       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:18:47.028700       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:18:47.029806       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:18:47.031840       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:19:06.775194       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:18:35 UTC, end at Sun 2023-01-08 21:23:07 UTC. --
	Jan 08 21:21:06 old-k8s-version-211828 kubelet[926]: E0108 21:21:06.429104     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:21:11 old-k8s-version-211828 kubelet[926]: E0108 21:21:11.429825     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:21:16 old-k8s-version-211828 kubelet[926]: E0108 21:21:16.430618     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:21:21 old-k8s-version-211828 kubelet[926]: E0108 21:21:21.431310     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:21:26 old-k8s-version-211828 kubelet[926]: E0108 21:21:26.432118     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:21:31 old-k8s-version-211828 kubelet[926]: E0108 21:21:31.432931     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:21:36 old-k8s-version-211828 kubelet[926]: E0108 21:21:36.433698     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:21:41 old-k8s-version-211828 kubelet[926]: E0108 21:21:41.434522     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:21:46 old-k8s-version-211828 kubelet[926]: E0108 21:21:46.435213     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:21:51 old-k8s-version-211828 kubelet[926]: E0108 21:21:51.435963     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:21:56 old-k8s-version-211828 kubelet[926]: E0108 21:21:56.436799     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:22:01 old-k8s-version-211828 kubelet[926]: E0108 21:22:01.437489     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:22:06 old-k8s-version-211828 kubelet[926]: E0108 21:22:06.438289     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:22:11 old-k8s-version-211828 kubelet[926]: E0108 21:22:11.439158     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:22:16 old-k8s-version-211828 kubelet[926]: E0108 21:22:16.439953     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:22:21 old-k8s-version-211828 kubelet[926]: E0108 21:22:21.440555     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:22:26 old-k8s-version-211828 kubelet[926]: E0108 21:22:26.441322     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:22:31 old-k8s-version-211828 kubelet[926]: E0108 21:22:31.442029     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:22:36 old-k8s-version-211828 kubelet[926]: E0108 21:22:36.442821     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:22:41 old-k8s-version-211828 kubelet[926]: E0108 21:22:41.443602     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:22:46 old-k8s-version-211828 kubelet[926]: E0108 21:22:46.444469     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:22:51 old-k8s-version-211828 kubelet[926]: E0108 21:22:51.445096     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:22:56 old-k8s-version-211828 kubelet[926]: E0108 21:22:56.445878     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:23:01 old-k8s-version-211828 kubelet[926]: E0108 21:23:01.446549     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:23:06 old-k8s-version-211828 kubelet[926]: E0108 21:23:06.447445     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-211828 -n old-k8s-version-211828
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-211828 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-5644d7b6d9-lm49s storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-211828 describe pod coredns-5644d7b6d9-lm49s storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-211828 describe pod coredns-5644d7b6d9-lm49s storage-provisioner: exit status 1 (61.168421ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-lm49s" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-211828 describe pod coredns-5644d7b6d9-lm49s storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (279.59s)

x
+
TestStartStop/group/no-preload/serial/FirstStart (281.67s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-211859 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
E0108 21:19:01.224633   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-211859 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: exit status 80 (4m39.751567319s)

-- stdout --
	* [no-preload-211859] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node no-preload-211859 in cluster no-preload-211859
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0108 21:18:59.143808  238176 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:18:59.143918  238176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:18:59.143925  238176 out.go:309] Setting ErrFile to fd 2...
	I0108 21:18:59.143931  238176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:18:59.144049  238176 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:18:59.144699  238176 out.go:303] Setting JSON to false
	I0108 21:18:59.145902  238176 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3688,"bootTime":1673209051,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:18:59.145969  238176 start.go:135] virtualization: kvm guest
	I0108 21:18:59.149157  238176 out.go:177] * [no-preload-211859] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:18:59.150849  238176 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:18:59.150768  238176 notify.go:220] Checking for updates...
	I0108 21:18:59.152513  238176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:18:59.154414  238176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:18:59.156172  238176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:18:59.157932  238176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:18:59.159947  238176 config.go:180] Loaded profile config "bridge-210619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:18:59.160052  238176 config.go:180] Loaded profile config "calico-210619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:18:59.160175  238176 config.go:180] Loaded profile config "old-k8s-version-211828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0108 21:18:59.160230  238176 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:18:59.191403  238176 docker.go:137] docker version: linux-20.10.22
	I0108 21:18:59.191534  238176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:18:59.294679  238176 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-08 21:18:59.213781572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:18:59.294779  238176 docker.go:254] overlay module found
	I0108 21:18:59.296922  238176 out.go:177] * Using the docker driver based on user configuration
	I0108 21:18:59.298263  238176 start.go:294] selected driver: docker
	I0108 21:18:59.298275  238176 start.go:838] validating driver "docker" against <nil>
	I0108 21:18:59.298293  238176 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:18:59.299168  238176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:18:59.396258  238176 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-08 21:18:59.3188474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientI
nfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:18:59.396421  238176 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I0108 21:18:59.396647  238176 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:18:59.398778  238176 out.go:177] * Using Docker driver with root privileges
	I0108 21:18:59.400199  238176 cni.go:95] Creating CNI manager for ""
	I0108 21:18:59.400215  238176 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:18:59.400233  238176 start_flags.go:312] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 21:18:59.400252  238176 start_flags.go:317] config:
	{Name:no-preload-211859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:no-preload-211859 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:18:59.402167  238176 out.go:177] * Starting control plane node no-preload-211859 in cluster no-preload-211859
	I0108 21:18:59.403829  238176 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:18:59.405434  238176 out.go:177] * Pulling base image ...
	I0108 21:18:59.406944  238176 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:18:59.406982  238176 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:18:59.407121  238176 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/config.json ...
	I0108 21:18:59.407154  238176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/config.json: {Name:mk908ca3d16a3ba2e7eb01a5b00d635be1cdedaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:18:59.407201  238176 cache.go:107] acquiring lock: {Name:mka4eae081deb9dc030a8e6d208cdbfc375fedd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:18:59.407223  238176 cache.go:107] acquiring lock: {Name:mk5f6bff7f6f0a24f6225496f42d8e8e28b27999 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:18:59.407270  238176 cache.go:107] acquiring lock: {Name:mk5f9a0ef25a028cc0da95c581faa4f8582f8133 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:18:59.407296  238176 cache.go:107] acquiring lock: {Name:mk1ba37dc36f668cc1aa7c0cabe840314426c4d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:18:59.407308  238176 cache.go:107] acquiring lock: {Name:mka15fcca44dc28e79d1a5c07b3e2caf71bae5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:18:59.407301  238176 cache.go:107] acquiring lock: {Name:mk09e8a53a311c6d58c16c85cb6a7a373e3c68b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:18:59.407291  238176 cache.go:107] acquiring lock: {Name:mk240cd96639812e2ee7ab4caa38c9f49d9f4169 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:18:59.407393  238176 cache.go:107] acquiring lock: {Name:mkcc5294a2af912a919e5a940c540341ff897a1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:18:59.407467  238176 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 21:18:59.407510  238176 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 exists
	I0108 21:18:59.407531  238176 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 336.453µs
	I0108 21:18:59.407538  238176 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 exists
	I0108 21:18:59.407551  238176 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 21:18:59.407557  238176 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 exists
	I0108 21:18:59.407565  238176 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 exists
	I0108 21:18:59.407554  238176 cache.go:96] cache image "registry.k8s.io/etcd:3.5.4-0" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0" took 254.5µs
	I0108 21:18:59.407569  238176 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.25.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3" took 221.598µs
	I0108 21:18:59.407585  238176 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.4-0 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 succeeded
	I0108 21:18:59.407587  238176 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 exists
	I0108 21:18:59.407590  238176 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.25.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 succeeded
	I0108 21:18:59.407587  238176 cache.go:96] cache image "registry.k8s.io/pause:3.8" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8" took 281.102µs
	I0108 21:18:59.407586  238176 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.25.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3" took 373.539µs
	I0108 21:18:59.407609  238176 cache.go:80] save to tar file registry.k8s.io/pause:3.8 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 succeeded
	I0108 21:18:59.407612  238176 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.25.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 succeeded
	I0108 21:18:59.407614  238176 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 exists
	I0108 21:18:59.407610  238176 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.25.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3" took 391.383µs
	I0108 21:18:59.407623  238176 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.25.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 succeeded
	I0108 21:18:59.407632  238176 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.25.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3" took 366.327µs
	I0108 21:18:59.407643  238176 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 exists
	I0108 21:18:59.407647  238176 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.25.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 succeeded
	I0108 21:18:59.407661  238176 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3" took 362.397µs
	I0108 21:18:59.407676  238176 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 succeeded
	I0108 21:18:59.407691  238176 cache.go:87] Successfully saved all images to host disk.
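
Note on the cache.go lines above: each image ref resolves to a tarball under .minikube/cache/images, and the near-instant "took ...µs" timings come from an existence check that short-circuits the save. The Go sketch below illustrates that check-before-save pattern under stated assumptions; tarPathFor and ensureCached are hypothetical helper names for illustration, not minikube's actual API.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// tarPathFor maps an image ref such as "registry.k8s.io/pause:3.8" to a cache
// path like ".../cache/images/amd64/registry.k8s.io/pause_3.8", mirroring the
// layout visible in the log above. The helper itself is hypothetical.
func tarPathFor(cacheDir, image string) string {
	return filepath.Join(cacheDir, "images", "amd64", strings.ReplaceAll(image, ":", "_"))
}

// ensureCached returns immediately when the tarball already exists, which is
// why every "cache image ... took" line above is in the microsecond range;
// save is only invoked on a cache miss.
func ensureCached(cacheDir, image string, save func(dst string) error) error {
	start := time.Now()
	dst := tarPathFor(cacheDir, image)
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("cache image %q -> %q took %s (exists)\n", image, dst, time.Since(start))
		return nil
	}
	return save(dst)
}

func main() {
	noop := func(dst string) error { fmt.Println("would save", dst); return nil }
	_ = ensureCached(os.TempDir(), "registry.k8s.io/pause:3.8", noop)
}
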
	I0108 21:18:59.431218  238176 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:18:59.431250  238176 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:18:59.431271  238176 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:18:59.431312  238176 start.go:364] acquiring machines lock for no-preload-211859: {Name:mk421f625ba7c0f468447c7930aeee12b4ccfc5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:18:59.431527  238176 start.go:368] acquired machines lock for "no-preload-211859" in 122.575µs
	I0108 21:18:59.431560  238176 start.go:93] Provisioning new machine with config: &{Name:no-preload-211859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:no-preload-211859 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:18:59.431660  238176 start.go:125] createHost starting for "" (driver="docker")
	I0108 21:18:59.434339  238176 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 21:18:59.434613  238176 start.go:159] libmachine.API.Create for "no-preload-211859" (driver="docker")
	I0108 21:18:59.434644  238176 client.go:168] LocalClient.Create starting
	I0108 21:18:59.434711  238176 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem
	I0108 21:18:59.434737  238176 main.go:134] libmachine: Decoding PEM data...
	I0108 21:18:59.434757  238176 main.go:134] libmachine: Parsing certificate...
	I0108 21:18:59.434811  238176 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem
	I0108 21:18:59.434836  238176 main.go:134] libmachine: Decoding PEM data...
	I0108 21:18:59.434854  238176 main.go:134] libmachine: Parsing certificate...
	I0108 21:18:59.435184  238176 cli_runner.go:164] Run: docker network inspect no-preload-211859 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 21:18:59.458436  238176 cli_runner.go:211] docker network inspect no-preload-211859 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 21:18:59.458504  238176 network_create.go:272] running [docker network inspect no-preload-211859] to gather additional debugging logs...
	I0108 21:18:59.458529  238176 cli_runner.go:164] Run: docker network inspect no-preload-211859
	W0108 21:18:59.481614  238176 cli_runner.go:211] docker network inspect no-preload-211859 returned with exit code 1
	I0108 21:18:59.481642  238176 network_create.go:275] error running [docker network inspect no-preload-211859]: docker network inspect no-preload-211859: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-211859
	I0108 21:18:59.481651  238176 network_create.go:277] output of [docker network inspect no-preload-211859]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-211859
	
	** /stderr **
	I0108 21:18:59.481690  238176 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:18:59.507209  238176 network.go:244] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b55bc2878bca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d4:2d:1f:91}}
	I0108 21:18:59.508251  238176 network.go:244] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-6ab3f57c56bf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:58:4f:a6:4e}}
	I0108 21:18:59.508898  238176 network.go:244] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-c9c7b4f8f7ef IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:c7:bc:cf:86}}
	I0108 21:18:59.509574  238176 network.go:244] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-e48a739a7de5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:13:4f:14:d7}}
	I0108 21:18:59.510403  238176 network.go:306] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.85.0:0xc00037cab0] misses:0}
	I0108 21:18:59.510434  238176 network.go:239] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 21:18:59.510445  238176 network_create.go:115] attempt to create docker network no-preload-211859 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0108 21:18:59.510499  238176 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-211859 no-preload-211859
	I0108 21:18:59.578588  238176 network_create.go:99] docker network no-preload-211859 192.168.85.0/24 created
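
The four "skipping subnet ... that is taken" lines and the final "using free private subnet 192.168.85.0/24" above show the network step walking candidate /24 blocks (49, 58, 67, 76, 85 — increments of 9) until one is unclaimed by an existing docker bridge. Below is a minimal Go sketch of that scan, assuming the step size observed in the log; firstFreeSubnet is an invented name, not minikube's network package.

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks candidate /24s starting at 192.168.49.0 in steps of 9
// (matching the 49 -> 58 -> 67 -> 76 -> 85 progression in the log) and returns
// the first one not present in taken.
func firstFreeSubnet(taken map[string]bool) (*net.IPNet, error) {
	for third := 49; third <= 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			continue // subnet already backs an existing bridge network
		}
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return ipnet, nil
	}
	return nil, fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	ipnet, err := firstFreeSubnet(taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", ipnet) // 192.168.85.0/24, as in the log
}
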
	I0108 21:18:59.578619  238176 kic.go:106] calculated static IP "192.168.85.2" for the "no-preload-211859" container
	I0108 21:18:59.578691  238176 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 21:18:59.603910  238176 cli_runner.go:164] Run: docker volume create no-preload-211859 --label name.minikube.sigs.k8s.io=no-preload-211859 --label created_by.minikube.sigs.k8s.io=true
	I0108 21:18:59.628539  238176 oci.go:103] Successfully created a docker volume no-preload-211859
	I0108 21:18:59.628631  238176 cli_runner.go:164] Run: docker run --rm --name no-preload-211859-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-211859 --entrypoint /usr/bin/test -v no-preload-211859:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
	I0108 21:19:00.244490  238176 oci.go:107] Successfully prepared a docker volume no-preload-211859
	I0108 21:19:00.244540  238176 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	W0108 21:19:00.244690  238176 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 21:19:00.244795  238176 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 21:19:00.347432  238176 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-211859 --name no-preload-211859 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-211859 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-211859 --network no-preload-211859 --ip 192.168.85.2 --volume no-preload-211859:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
	I0108 21:19:00.753265  238176 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Running}}
	I0108 21:19:00.781750  238176 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:19:00.807152  238176 cli_runner.go:164] Run: docker exec no-preload-211859 stat /var/lib/dpkg/alternatives/iptables
	I0108 21:19:00.860473  238176 oci.go:144] the created container "no-preload-211859" has a running status.
	I0108 21:19:00.860505  238176 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa...
	I0108 21:19:01.047134  238176 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 21:19:01.125806  238176 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:19:01.157291  238176 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 21:19:01.157311  238176 kic_runner.go:114] Args: [docker exec --privileged no-preload-211859 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 21:19:01.233547  238176 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:19:01.264926  238176 machine.go:88] provisioning docker machine ...
	I0108 21:19:01.264959  238176 ubuntu.go:169] provisioning hostname "no-preload-211859"
	I0108 21:19:01.265025  238176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:19:01.292172  238176 main.go:134] libmachine: Using SSH client type: native
	I0108 21:19:01.292360  238176 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33017 <nil> <nil>}
	I0108 21:19:01.292378  238176 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-211859 && echo "no-preload-211859" | sudo tee /etc/hostname
	I0108 21:19:01.415983  238176 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-211859
	
	I0108 21:19:01.416069  238176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:19:01.443381  238176 main.go:134] libmachine: Using SSH client type: native
	I0108 21:19:01.443601  238176 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33017 <nil> <nil>}
	I0108 21:19:01.443623  238176 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-211859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-211859/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-211859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:19:01.563415  238176 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:19:01.563452  238176 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:19:01.563499  238176 ubuntu.go:177] setting up certificates
	I0108 21:19:01.563511  238176 provision.go:83] configureAuth start
	I0108 21:19:01.563565  238176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-211859
	I0108 21:19:01.588929  238176 provision.go:138] copyHostCerts
	I0108 21:19:01.588993  238176 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:19:01.589006  238176 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:19:01.589096  238176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:19:01.589189  238176 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:19:01.589198  238176 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:19:01.589225  238176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:19:01.589275  238176 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:19:01.589282  238176 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:19:01.589304  238176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:19:01.589353  238176 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.no-preload-211859 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-211859]
	I0108 21:19:01.742148  238176 provision.go:172] copyRemoteCerts
	I0108 21:19:01.742212  238176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:19:01.742246  238176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:19:01.767076  238176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33017 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:19:01.855620  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:19:01.873761  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:19:01.891061  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 21:19:01.908028  238176 provision.go:86] duration metric: configureAuth took 344.501838ms
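
configureAuth above copies the host CA material into place and then issues a server certificate whose SANs cover the container IP, localhost, and the machine name (the san=[...] list in the provision.go line). As a rough illustration only — not minikube's provision code — a self-contained Go sketch of producing such a SAN-bearing certificate with the standard library could look like this; it self-signs for brevity, whereas the real flow signs with ca-key.pem.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key and template; a real provisioner would sign with the CA key
	// (ca.pem / ca-key.pem in the log) instead of self-signing.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-211859"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the log: IP addresses plus hostnames.
		IPAddresses: []net.IP{net.ParseIP("192.168.85.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-211859"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
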
	I0108 21:19:01.908054  238176 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:19:01.908346  238176 config.go:180] Loaded profile config "no-preload-211859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:19:01.908376  238176 machine.go:91] provisioned docker machine in 643.422887ms
	I0108 21:19:01.908386  238176 client.go:171] LocalClient.Create took 2.473731585s
	I0108 21:19:01.908410  238176 start.go:167] duration metric: libmachine.API.Create for "no-preload-211859" took 2.473794937s
	I0108 21:19:01.908440  238176 start.go:300] post-start starting for "no-preload-211859" (driver="docker")
	I0108 21:19:01.908453  238176 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:19:01.908520  238176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:19:01.908578  238176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:19:01.935410  238176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33017 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:19:02.029923  238176 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:19:02.033355  238176 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:19:02.033381  238176 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:19:02.033396  238176 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:19:02.033402  238176 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:19:02.033410  238176 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:19:02.033465  238176 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:19:02.033531  238176 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:19:02.033607  238176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:19:02.040720  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:19:02.060277  238176 start.go:303] post-start completed in 151.818179ms
	I0108 21:19:02.060724  238176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-211859
	I0108 21:19:02.085911  238176 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/config.json ...
	I0108 21:19:02.086201  238176 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:19:02.086250  238176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:19:02.111359  238176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33017 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:19:02.200765  238176 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:19:02.204893  238176 start.go:128] duration metric: createHost completed in 2.77321679s
	I0108 21:19:02.204922  238176 start.go:83] releasing machines lock for "no-preload-211859", held for 2.773372965s
	I0108 21:19:02.205007  238176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-211859
	I0108 21:19:02.230935  238176 ssh_runner.go:195] Run: cat /version.json
	I0108 21:19:02.231000  238176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:19:02.231035  238176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:19:02.231060  238176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:19:02.257725  238176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33017 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:19:02.257885  238176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33017 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:19:02.365653  238176 ssh_runner.go:195] Run: systemctl --version
	I0108 21:19:02.369477  238176 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:19:02.379428  238176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:19:02.388753  238176 docker.go:189] disabling docker service ...
	I0108 21:19:02.388810  238176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:19:02.404834  238176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:19:02.414200  238176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:19:02.499166  238176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:19:02.581552  238176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:19:02.591008  238176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:19:02.603642  238176 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:19:02.611212  238176 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:19:02.619009  238176 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:19:02.627130  238176 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 21:19:02.635969  238176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:19:02.642547  238176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:19:02.649272  238176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:19:02.723710  238176 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:19:02.791020  238176 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:19:02.791083  238176 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:19:02.794749  238176 start.go:472] Will wait 60s for crictl version
	I0108 21:19:02.794802  238176 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:19:02.819010  238176 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:19:02.819061  238176 ssh_runner.go:195] Run: containerd --version
	I0108 21:19:02.846351  238176 ssh_runner.go:195] Run: containerd --version
	I0108 21:19:02.873127  238176 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:19:02.874601  238176 cli_runner.go:164] Run: docker network inspect no-preload-211859 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:19:02.899922  238176 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0108 21:19:02.903146  238176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:19:02.912272  238176 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:19:02.912310  238176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:19:02.935343  238176 containerd.go:549] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.25.3". assuming images are not preloaded.
	I0108 21:19:02.935368  238176 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.25.3 registry.k8s.io/kube-controller-manager:v1.25.3 registry.k8s.io/kube-scheduler:v1.25.3 registry.k8s.io/kube-proxy:v1.25.3 registry.k8s.io/pause:3.8 registry.k8s.io/etcd:3.5.4-0 registry.k8s.io/coredns/coredns:v1.9.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 21:19:02.935433  238176 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:19:02.935515  238176 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.25.3
	I0108 21:19:02.935526  238176 image.go:134] retrieving image: registry.k8s.io/pause:3.8
	I0108 21:19:02.935544  238176 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.25.3
	I0108 21:19:02.935531  238176 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.25.3
	I0108 21:19:02.935459  238176 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.25.3
	I0108 21:19:02.935515  238176 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.9.3
	I0108 21:19:02.935442  238176 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.4-0
	I0108 21:19:02.936531  238176 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.25.3: Error: No such image: registry.k8s.io/kube-scheduler:v1.25.3
	I0108 21:19:02.936557  238176 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.9.3: Error: No such image: registry.k8s.io/coredns/coredns:v1.9.3
	I0108 21:19:02.936558  238176 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.4-0: Error: No such image: registry.k8s.io/etcd:3.5.4-0
	I0108 21:19:02.936590  238176 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:19:02.936688  238176 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.25.3: Error: No such image: registry.k8s.io/kube-proxy:v1.25.3
	I0108 21:19:02.936730  238176 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.25.3: Error: No such image: registry.k8s.io/kube-controller-manager:v1.25.3
	I0108 21:19:02.936986  238176 image.go:177] daemon lookup for registry.k8s.io/pause:3.8: Error: No such image: registry.k8s.io/pause:3.8
	I0108 21:19:02.937008  238176 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.25.3: Error: No such image: registry.k8s.io/kube-apiserver:v1.25.3
	I0108 21:19:03.065235  238176 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.25.3"
	I0108 21:19:03.071045  238176 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.25.3"
	I0108 21:19:03.074998  238176 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.8"
	I0108 21:19:03.085411  238176 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.4-0"
	I0108 21:19:03.089933  238176 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.25.3"
	I0108 21:19:03.091240  238176 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.9.3"
	I0108 21:19:03.091377  238176 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.25.3" needs transfer: "registry.k8s.io/kube-proxy:v1.25.3" does not exist at hash "beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041" in container runtime
	I0108 21:19:03.091415  238176 cri.go:216] Removing image: registry.k8s.io/kube-proxy:v1.25.3
	I0108 21:19:03.091462  238176 ssh_runner.go:195] Run: which crictl
	I0108 21:19:03.093276  238176 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.25.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.25.3" does not exist at hash "0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0" in container runtime
	I0108 21:19:03.093357  238176 cri.go:216] Removing image: registry.k8s.io/kube-apiserver:v1.25.3
	I0108 21:19:03.093415  238176 ssh_runner.go:195] Run: which crictl
	I0108 21:19:03.100802  238176 cache_images.go:116] "registry.k8s.io/pause:3.8" needs transfer: "registry.k8s.io/pause:3.8" does not exist at hash "4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517" in container runtime
	I0108 21:19:03.100849  238176 cri.go:216] Removing image: registry.k8s.io/pause:3.8
	I0108 21:19:03.100890  238176 ssh_runner.go:195] Run: which crictl
	I0108 21:19:03.105337  238176 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.25.3"
	I0108 21:19:03.117275  238176 cache_images.go:116] "registry.k8s.io/etcd:3.5.4-0" needs transfer: "registry.k8s.io/etcd:3.5.4-0" does not exist at hash "a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66" in container runtime
	I0108 21:19:03.117294  238176 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.25.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.25.3" does not exist at hash "6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912" in container runtime
	I0108 21:19:03.117325  238176 cri.go:216] Removing image: registry.k8s.io/etcd:3.5.4-0
	I0108 21:19:03.117327  238176 cri.go:216] Removing image: registry.k8s.io/kube-scheduler:v1.25.3
	I0108 21:19:03.117366  238176 ssh_runner.go:195] Run: which crictl
	I0108 21:19:03.117368  238176 ssh_runner.go:195] Run: which crictl
	I0108 21:19:03.119859  238176 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.9.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.9.3" does not exist at hash "5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a" in container runtime
	I0108 21:19:03.119885  238176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.25.3
	I0108 21:19:03.119899  238176 cri.go:216] Removing image: registry.k8s.io/coredns/coredns:v1.9.3
	I0108 21:19:03.119938  238176 ssh_runner.go:195] Run: which crictl
	I0108 21:19:03.120012  238176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.25.3
	I0108 21:19:03.120018  238176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.8
	I0108 21:19:03.127422  238176 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.25.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.25.3" does not exist at hash "60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a" in container runtime
	I0108 21:19:03.127512  238176 cri.go:216] Removing image: registry.k8s.io/kube-controller-manager:v1.25.3
	I0108 21:19:03.127519  238176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.25.3
	I0108 21:19:03.127526  238176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.4-0
	I0108 21:19:03.127539  238176 ssh_runner.go:195] Run: which crictl
	I0108 21:19:03.215008  238176 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3
	I0108 21:19:03.215069  238176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.9.3
	I0108 21:19:03.215086  238176 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8
	I0108 21:19:03.215147  238176 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3
	I0108 21:19:03.215154  238176 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.8
	I0108 21:19:03.215229  238176 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.25.3
	I0108 21:19:03.215096  238176 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.25.3
	I0108 21:19:03.215610  238176 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0
	I0108 21:19:03.215680  238176 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.4-0
	I0108 21:19:03.219137  238176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.25.3
	I0108 21:19:03.219147  238176 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3
	I0108 21:19:03.219270  238176 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.25.3
	I0108 21:19:03.244926  238176 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.8: stat -c "%s %y" /var/lib/minikube/images/pause_3.8: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.8': No such file or directory
	I0108 21:19:03.244964  238176 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.25.3': No such file or directory
	I0108 21:19:03.244974  238176 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.5.4-0': No such file or directory
	I0108 21:19:03.244986  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 --> /var/lib/minikube/images/pause_3.8 (311296 bytes)
	I0108 21:19:03.244993  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 --> /var/lib/minikube/images/kube-proxy_v1.25.3 (20268032 bytes)
	I0108 21:19:03.244998  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 --> /var/lib/minikube/images/etcd_3.5.4-0 (102160384 bytes)
	I0108 21:19:03.244929  238176 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3
	I0108 21:19:03.245089  238176 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.9.3
	I0108 21:19:03.244949  238176 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.25.3': No such file or directory
	I0108 21:19:03.245167  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 --> /var/lib/minikube/images/kube-apiserver_v1.25.3 (34241024 bytes)
	I0108 21:19:03.251701  238176 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.25.3': No such file or directory
	I0108 21:19:03.251739  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 --> /var/lib/minikube/images/kube-scheduler_v1.25.3 (15801856 bytes)
	I0108 21:19:03.251762  238176 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.9.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.9.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.9.3': No such file or directory
	I0108 21:19:03.251708  238176 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3
	I0108 21:19:03.251784  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 --> /var/lib/minikube/images/coredns_v1.9.3 (14839296 bytes)
	I0108 21:19:03.251844  238176 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.25.3
	I0108 21:19:03.327301  238176 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.8
	I0108 21:19:03.327370  238176 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.8
	I0108 21:19:03.331904  238176 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.25.3': No such file or directory
	I0108 21:19:03.331948  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 --> /var/lib/minikube/images/kube-controller-manager_v1.25.3 (31264768 bytes)
	I0108 21:19:03.563384  238176 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 from cache
	I0108 21:19:03.575318  238176 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.9.3
	I0108 21:19:03.575381  238176 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.9.3
	I0108 21:19:03.708976  238176 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 21:19:04.369453  238176 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 from cache
	I0108 21:19:04.369497  238176 containerd.go:233] Loading image: /var/lib/minikube/images/kube-scheduler_v1.25.3
	I0108 21:19:04.369541  238176 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0108 21:19:04.369563  238176 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.25.3
	I0108 21:19:04.369584  238176 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:19:04.369620  238176 ssh_runner.go:195] Run: which crictl
	I0108 21:19:05.141771  238176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:19:05.141790  238176 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 from cache
	I0108 21:19:05.141826  238176 containerd.go:233] Loading image: /var/lib/minikube/images/kube-proxy_v1.25.3
	I0108 21:19:05.141869  238176 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.25.3
	I0108 21:19:05.166998  238176 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0108 21:19:05.167116  238176 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0108 21:19:06.086626  238176 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0108 21:19:06.086663  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0108 21:19:06.086720  238176 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 from cache
	I0108 21:19:06.086740  238176 containerd.go:233] Loading image: /var/lib/minikube/images/kube-apiserver_v1.25.3
	I0108 21:19:06.086771  238176 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.25.3
	I0108 21:19:07.447710  238176 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.25.3: (1.360911698s)
	I0108 21:19:07.447741  238176 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 from cache
	I0108 21:19:07.447762  238176 containerd.go:233] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.25.3
	I0108 21:19:07.447800  238176 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.25.3
	I0108 21:19:08.628327  238176 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.25.3: (1.180496971s)
	I0108 21:19:08.628360  238176 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 from cache
	I0108 21:19:08.628398  238176 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.4-0
	I0108 21:19:08.628457  238176 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.4-0
	I0108 21:19:12.117313  238176 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.4-0: (3.488828819s)
	I0108 21:19:12.117339  238176 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 from cache
	I0108 21:19:12.117369  238176 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0108 21:19:12.117410  238176 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0108 21:19:12.511935  238176 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0108 21:19:12.511979  238176 cache_images.go:123] Successfully loaded all cached images
	I0108 21:19:12.511985  238176 cache_images.go:92] LoadImages completed in 9.576606169s
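
The LoadImages block that ends here repeats one pattern per image: check whether the runtime already has it (ctr images check piped through grep), stat the tarball under /var/lib/minikube/images, scp it over on a miss, and finally ctr -n=k8s.io images import it. The Go sketch below compresses that loop with os/exec; loadIfMissing is an invented name and the scp step is omitted, so treat it as an outline of the technique rather than minikube's cache_images implementation.

package main

import (
	"fmt"
	"os/exec"
)

// loadIfMissing mirrors the per-image flow in the log: check whether the
// container runtime already has the image, and if not, import the cached
// tarball with ctr. Refs and paths are illustrative.
func loadIfMissing(ref, tarball string) error {
	// Equivalent in spirit to the log's
	// `sudo ctr -n=k8s.io images check | grep <ref>` existence check.
	check := exec.Command("sh", "-c",
		fmt.Sprintf("sudo ctr -n=k8s.io images check | grep -q %q", ref))
	if err := check.Run(); err == nil {
		return nil // already present in the container runtime
	}
	// Cache miss: import the tarball (the preceding scp step is omitted here).
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ctr import %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	images := map[string]string{
		"registry.k8s.io/pause:3.8":    "/var/lib/minikube/images/pause_3.8",
		"registry.k8s.io/etcd:3.5.4-0": "/var/lib/minikube/images/etcd_3.5.4-0",
	}
	for ref, tar := range images {
		if err := loadIfMissing(ref, tar); err != nil {
			fmt.Println("load failed:", err)
		}
	}
}
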
	I0108 21:19:12.512047  238176 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:19:12.535433  238176 cni.go:95] Creating CNI manager for ""
	I0108 21:19:12.535455  238176 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:19:12.535479  238176 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:19:12.535496  238176 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-211859 NodeName:no-preload-211859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:19:12.535670  238176 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-211859"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:19:12.535779  238176 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-211859 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:no-preload-211859 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:19:12.535835  238176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:19:12.542873  238176 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.25.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.25.3': No such file or directory
	
	Initiating transfer...
	I0108 21:19:12.542926  238176 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.25.3
	I0108 21:19:12.549971  238176 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubectl.sha256
	I0108 21:19:12.550000  238176 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubeadm.sha256
	I0108 21:19:12.550060  238176 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.25.3/kubectl
	I0108 21:19:12.550092  238176 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.25.3/kubeadm
	I0108 21:19:12.550000  238176 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubelet.sha256
	I0108 21:19:12.550200  238176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:19:12.553598  238176 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.25.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.25.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.25.3/kubeadm': No such file or directory
	I0108 21:19:12.553624  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/linux/amd64/v1.25.3/kubeadm --> /var/lib/minikube/binaries/v1.25.3/kubeadm (43802624 bytes)
	I0108 21:19:12.553715  238176 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.25.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.25.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.25.3/kubectl': No such file or directory
	I0108 21:19:12.553735  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/linux/amd64/v1.25.3/kubectl --> /var/lib/minikube/binaries/v1.25.3/kubectl (45015040 bytes)
	I0108 21:19:12.562413  238176 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.25.3/kubelet
	I0108 21:19:12.583439  238176 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.25.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.25.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.25.3/kubelet': No such file or directory
	I0108 21:19:12.583574  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/linux/amd64/v1.25.3/kubelet --> /var/lib/minikube/binaries/v1.25.3/kubelet (114237464 bytes)
	I0108 21:19:12.952032  238176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:19:12.959056  238176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (510 bytes)
	I0108 21:19:12.971962  238176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:19:12.984530  238176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2046 bytes)
	I0108 21:19:12.997124  238176 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:19:13.000115  238176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:19:13.009067  238176 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859 for IP: 192.168.85.2
	I0108 21:19:13.009175  238176 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:19:13.009226  238176 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:19:13.009283  238176 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/client.key
	I0108 21:19:13.009302  238176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/client.crt with IP's: []
	I0108 21:19:13.185264  238176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/client.crt ...
	I0108 21:19:13.185292  238176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/client.crt: {Name:mkfafee4e0b097302552449473e8f2637af513d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:19:13.185494  238176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/client.key ...
	I0108 21:19:13.185536  238176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/client.key: {Name:mke4163ae08cdcb937edd3d065ef51e9ca0800a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:19:13.185636  238176 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.key.43b9df8c
	I0108 21:19:13.185651  238176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 21:19:13.275423  238176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.crt.43b9df8c ...
	I0108 21:19:13.275455  238176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.crt.43b9df8c: {Name:mk2a98837da7dbd48f5c440db8a0535aa6334985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:19:13.275698  238176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.key.43b9df8c ...
	I0108 21:19:13.275713  238176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.key.43b9df8c: {Name:mk648823e604450cb2d3d99ec020cefd5ef0fb4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:19:13.275842  238176 certs.go:320] copying /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.crt
	I0108 21:19:13.275919  238176 certs.go:324] copying /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.key
	I0108 21:19:13.275984  238176 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/proxy-client.key
	I0108 21:19:13.276008  238176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/proxy-client.crt with IP's: []
	I0108 21:19:13.512195  238176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/proxy-client.crt ...
	I0108 21:19:13.512229  238176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/proxy-client.crt: {Name:mk8c0e02be2a07aa48d159d2597d81e400057299 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:19:13.512447  238176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/proxy-client.key ...
	I0108 21:19:13.512463  238176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/proxy-client.key: {Name:mk7f5cc9f6d5826bf3e7b0c771dce1deb25c5995 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:19:13.512657  238176 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:19:13.512704  238176 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:19:13.512724  238176 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:19:13.512759  238176 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:19:13.512796  238176 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:19:13.512828  238176 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:19:13.512880  238176 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:19:13.513424  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:19:13.531737  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:19:13.548774  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:19:13.567732  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 21:19:13.585334  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:19:13.602835  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:19:13.620661  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:19:13.639379  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:19:13.657561  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:19:13.675729  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:19:13.693211  238176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:19:13.710374  238176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:19:13.723449  238176 ssh_runner.go:195] Run: openssl version
	I0108 21:19:13.728501  238176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:19:13.736040  238176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:19:13.739080  238176 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:19:13.739124  238176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:19:13.743808  238176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:19:13.750956  238176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:19:13.758174  238176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:19:13.761404  238176 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:19:13.761441  238176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:19:13.766173  238176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:19:13.774436  238176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:19:13.781972  238176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:19:13.784909  238176 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:19:13.784949  238176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:19:13.789627  238176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:19:13.796601  238176 kubeadm.go:396] StartCluster: {Name:no-preload-211859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:no-preload-211859 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:19:13.796675  238176 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:19:13.796712  238176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:19:13.819712  238176 cri.go:87] found id: ""
	I0108 21:19:13.819759  238176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:19:13.826633  238176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:19:13.833534  238176 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:19:13.833582  238176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:19:13.840774  238176 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:19:13.840812  238176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:19:13.881415  238176 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:19:13.881508  238176 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:19:13.910239  238176 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:19:13.910314  238176 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:19:13.910363  238176 kubeadm.go:317] OS: Linux
	I0108 21:19:13.910433  238176 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:19:13.910501  238176 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:19:13.910571  238176 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:19:13.910623  238176 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:19:13.910701  238176 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:19:13.910775  238176 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:19:13.910845  238176 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:19:13.910903  238176 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:19:13.910991  238176 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:19:13.971849  238176 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:19:13.971975  238176 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:19:13.972098  238176 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:19:14.088498  238176 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:19:14.090679  238176 out.go:204]   - Generating certificates and keys ...
	I0108 21:19:14.090808  238176 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:19:14.090910  238176 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:19:14.395223  238176 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 21:19:14.627188  238176 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0108 21:19:14.813097  238176 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0108 21:19:15.103425  238176 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0108 21:19:15.194626  238176 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0108 21:19:15.194801  238176 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-211859] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0108 21:19:15.630678  238176 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0108 21:19:15.630843  238176 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-211859] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0108 21:19:15.684234  238176 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 21:19:15.883770  238176 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 21:19:16.052599  238176 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0108 21:19:16.052726  238176 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:19:16.226179  238176 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:19:16.459569  238176 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:19:16.673795  238176 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:19:17.001399  238176 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:19:17.012832  238176 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:19:17.013733  238176 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:19:17.013807  238176 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:19:17.100463  238176 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:19:17.102673  238176 out.go:204]   - Booting up control plane ...
	I0108 21:19:17.102792  238176 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:19:17.104341  238176 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:19:17.105200  238176 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:19:17.105846  238176 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:19:17.107532  238176 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:19:23.109487  238176 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.001961 seconds
	I0108 21:19:23.109639  238176 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:19:23.117006  238176 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:19:23.632934  238176 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:19:23.633175  238176 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-211859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:19:24.139912  238176 kubeadm.go:317] [bootstrap-token] Using token: v25s40.0vrvhfh0rrmienhz
	I0108 21:19:24.141682  238176 out.go:204]   - Configuring RBAC rules ...
	I0108 21:19:24.141820  238176 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:19:24.146230  238176 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:19:24.150830  238176 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:19:24.152873  238176 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:19:24.154773  238176 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:19:24.156625  238176 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:19:24.165918  238176 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:19:24.361051  238176 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:19:24.549594  238176 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:19:24.550450  238176 kubeadm.go:317] 
	I0108 21:19:24.550538  238176 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:19:24.550553  238176 kubeadm.go:317] 
	I0108 21:19:24.550635  238176 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:19:24.550643  238176 kubeadm.go:317] 
	I0108 21:19:24.550679  238176 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:19:24.550778  238176 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:19:24.550871  238176 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:19:24.550886  238176 kubeadm.go:317] 
	I0108 21:19:24.550958  238176 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 21:19:24.550967  238176 kubeadm.go:317] 
	I0108 21:19:24.551024  238176 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:19:24.551032  238176 kubeadm.go:317] 
	I0108 21:19:24.551086  238176 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:19:24.551213  238176 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:19:24.551316  238176 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:19:24.551327  238176 kubeadm.go:317] 
	I0108 21:19:24.551444  238176 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:19:24.551565  238176 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:19:24.551590  238176 kubeadm.go:317] 
	I0108 21:19:24.551712  238176 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token v25s40.0vrvhfh0rrmienhz \
	I0108 21:19:24.551856  238176 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:19:24.551885  238176 kubeadm.go:317] 	--control-plane 
	I0108 21:19:24.551894  238176 kubeadm.go:317] 
	I0108 21:19:24.552008  238176 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:19:24.552020  238176 kubeadm.go:317] 
	I0108 21:19:24.552148  238176 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token v25s40.0vrvhfh0rrmienhz \
	I0108 21:19:24.552256  238176 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:19:24.553931  238176 kubeadm.go:317] W0108 21:19:13.873020    1171 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:19:24.554137  238176 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:19:24.554255  238176 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:19:24.554284  238176 cni.go:95] Creating CNI manager for ""
	I0108 21:19:24.554298  238176 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:19:24.556657  238176 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:19:24.558582  238176 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:19:24.563001  238176 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:19:24.563019  238176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:19:24.620608  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:19:25.305957  238176 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:19:25.306073  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=no-preload-211859 minikube.k8s.io/updated_at=2023_01_08T21_19_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:25.306079  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:25.313535  238176 ops.go:34] apiserver oom_adj: -16
	I0108 21:19:25.412971  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:25.973823  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:26.473983  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:26.973438  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:27.473967  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:27.973603  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:28.473690  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:28.973517  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:29.473828  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:29.973296  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:30.473241  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:30.973293  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:31.474166  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:31.973215  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:32.474098  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:32.973840  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:33.473590  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:33.973974  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:34.473318  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:34.973225  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:35.473278  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:35.973559  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:36.473262  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:36.973833  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:37.474165  238176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:19:37.550508  238176 kubeadm.go:1067] duration metric: took 12.244485223s to wait for elevateKubeSystemPrivileges.
	I0108 21:19:37.550547  238176 kubeadm.go:398] StartCluster complete in 23.753953515s
	I0108 21:19:37.550568  238176 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:19:37.550687  238176 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:19:37.552176  238176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:19:38.342339  238176 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-211859" rescaled to 1
	I0108 21:19:38.342413  238176 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:19:38.414454  238176 out.go:177] * Verifying Kubernetes components...
	I0108 21:19:38.342584  238176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:19:38.342596  238176 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0108 21:19:38.342830  238176 config.go:180] Loaded profile config "no-preload-211859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:19:38.414619  238176 addons.go:65] Setting storage-provisioner=true in profile "no-preload-211859"
	I0108 21:19:38.497865  238176 addons.go:227] Setting addon storage-provisioner=true in "no-preload-211859"
	W0108 21:19:38.497889  238176 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:19:38.497889  238176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:19:38.414630  238176 addons.go:65] Setting default-storageclass=true in profile "no-preload-211859"
	I0108 21:19:38.497924  238176 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-211859"
	I0108 21:19:38.497974  238176 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:19:38.498455  238176 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:19:38.498798  238176 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:19:38.562513  238176 addons.go:227] Setting addon default-storageclass=true in "no-preload-211859"
	W0108 21:19:38.583553  238176 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:19:38.583596  238176 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:19:38.570607  238176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:19:38.572082  238176 node_ready.go:35] waiting up to 6m0s for node "no-preload-211859" to be "Ready" ...
	I0108 21:19:38.583900  238176 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:19:38.584080  238176 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:19:38.622180  238176 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:19:38.622201  238176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:19:38.622248  238176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:19:38.654733  238176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33017 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:19:38.656504  238176 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:19:38.656529  238176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:19:38.656584  238176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:19:38.686702  238176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33017 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:19:38.786540  238176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:19:38.800977  238176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:19:39.025280  238176 start.go:826] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
	I0108 21:19:39.343632  238176 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 21:19:39.345194  238176 addons.go:488] enableAddons completed in 1.002594458s
	I0108 21:19:40.811869  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:19:43.309832  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:19:45.310047  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:19:47.811078  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:19:50.310777  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:19:52.809954  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:19:54.810346  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:19:57.311055  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:19:59.810665  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:01.810765  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:04.309758  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:06.311087  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:08.809809  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:10.809979  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:13.309705  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:15.310111  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:17.810160  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:20.309866  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:22.310432  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:24.810906  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:27.309919  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:29.310405  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:31.809901  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:33.810421  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:36.310685  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:38.809872  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:40.810717  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:43.310062  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:45.809880  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:48.310449  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:50.310637  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:52.810262  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:54.810501  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:57.310463  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:20:59.809607  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:01.809645  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:03.809892  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:06.312183  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:08.810045  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:10.810085  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:13.309701  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:15.810341  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:17.811099  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:20.309592  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:22.309823  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:24.809637  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:26.810185  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:28.810565  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:31.310002  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:33.809947  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:35.811248  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:38.309824  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:40.310117  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:42.310275  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:44.809516  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:46.809649  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:48.810315  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:51.309943  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:53.809699  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:56.310210  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:58.809748  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:00.810593  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:03.310194  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:05.809750  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:07.809939  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:10.309596  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:12.309705  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:14.310432  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:16.810413  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:19.309811  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:21.309972  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:23.810232  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:25.810410  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:28.310174  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:30.310481  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:32.311174  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:34.810711  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:37.310466  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:39.810530  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:42.309510  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:44.310568  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:46.809625  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:48.809973  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:50.810329  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:53.310062  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:55.810600  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:58.310283  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:00.310562  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:02.810458  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:05.310071  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:07.310184  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:09.810127  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:12.310383  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:14.310499  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:16.809792  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:18.810138  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:21.309926  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:23.310592  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:25.810236  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:27.810585  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:30.309806  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:32.309936  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:34.809756  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:37.310339  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:38.812410  238176 node_ready.go:38] duration metric: took 4m0.228660027s waiting for node "no-preload-211859" to be "Ready" ...
	I0108 21:23:38.814872  238176 out.go:177] 
	W0108 21:23:38.817068  238176 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:23:38.817087  238176 out.go:239] * 
	* 
	W0108 21:23:38.817914  238176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:23:38.820219  238176 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p no-preload-211859 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3": exit status 80
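The long run of node_ready.go:58 lines above is the readiness poll that finally times out at 21:23:38 and produces the GUEST_START error. As a point of reference only, here is a minimal client-go sketch of that kind of Ready-condition poll. This is not minikube's actual node_ready.go code; the kubeconfig path, node name, 6-minute budget and ~2s poll interval are simply taken from the log above, and the helper name is hypothetical.

```go
// Illustrative sketch of polling a node's Ready condition, in the spirit of
// the node_ready.go wait seen in the log. Not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(client kubernetes.Interface, name string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path taken from the log above (host-side kubeconfig).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/15565-3617/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same wait budget as "Will wait 6m0s for node"
	for time.Now().Before(deadline) {
		if ok, err := nodeReady(client, "no-preload-211859"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second) // the log shows checks roughly every 2.5s
	}
	fmt.Println("timed out waiting for node to be Ready") // the outcome recorded above as GUEST_START
}
```

In this run the poll never sees Ready=True: the node stays NotReady for the full four-plus minutes shown above, which is consistent with the CNI (kindnet) that minikube recommends for the docker driver + containerd never bringing the node's network up.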
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-211859
helpers_test.go:235: (dbg) docker inspect no-preload-211859:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65",
	        "Created": "2023-01-08T21:19:00.370984432Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 238788,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:19:00.742893962Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/hostname",
	        "HostsPath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/hosts",
	        "LogPath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65-json.log",
	        "Name": "/no-preload-211859",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-211859:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-211859",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676-init/diff:/var/lib/docker/overlay2/08b33854295261bb38a0074055b3dd7f2f489d35ea70ee5a594182ad1dc0fd2b/diff:/var/lib/docker/overlay2/5abe192e44c49d2a25560e4c1ae4a6ba5ec58100f8e1869d306a28b614caf414/diff:/var/lib/docker/overlay2/156a32df2dac676d7cd6558d5ad13be5054eb2fddc8d1b10137bf34bd6b1ddf7/diff:/var/lib/docker/overlay2/f3a5b8630133b5cdc48bbdb98f666efad221386dbd08f5fc096c7ba6d3f31667/diff:/var/lib/docker/overlay2/4d4ccd218e479874818175ec6c4e28f06f0084eed74b9a83af5a96e8bf41e6c7/diff:/var/lib/docker/overlay2/ab8a3247b29cb45a0024dbc2477a50b042f03e39064c09e23d21a2b5f331013c/diff:/var/lib/docker/overlay2/cd6161fa5d366910a9eb537894f374c49b0ba256c81a9c1ad3d4f4786d730668/diff:/var/lib/docker/overlay2/a26d88db409214dee405ed385f40b0d58e37ef670c26e65c1132eb27e3bb984d/diff:/var/lib/docker/overlay2/f76d833c20bf46fe7aec412f931039cdeb58b129552b97228c55fada8b05e63a/diff:/var/lib/docker/overlay2/584beb
80220462b8bc5a6c651735d55544ebef775c71b1a91b92235049a7e9da/diff:/var/lib/docker/overlay2/5209ebd508837f05291d5f22eff307db47f9c94691dd096cdc6ff6446db45461/diff:/var/lib/docker/overlay2/968fc302c0737742d4698bb0026b5cfc50c4b2934930db492c97b346fca2bd67/diff:/var/lib/docker/overlay2/b469d09ff82a75fb0f6dd04264a04b21d3d113c8577afc6dbc6a7dafe4022179/diff:/var/lib/docker/overlay2/2869116cf2f02bdc6170bde623a11022eba8e5c0f84630c9a0fd0a86483d0ed2/diff:/var/lib/docker/overlay2/ad4794ae637b0ce306963007b35562110fba050d9e3cd6196830448c749d7cc6/diff:/var/lib/docker/overlay2/a53b191897dfffb12ad2948cad16d668a0f7c2be5e3e6fd2556fe22d8d608879/diff:/var/lib/docker/overlay2/bbc7772821406640e29f442eb8752d92d7b72e8314035d6cf94255976b90642e/diff:/var/lib/docker/overlay2/5e3ac5600af83e3f6f756895fbd7a66fa97e799f659541508353078d057fd89b/diff:/var/lib/docker/overlay2/79c23e7084aa1de637cb09a7c07679d6c8d4ff6c6e7384f7f15cb549f8186fd6/diff:/var/lib/docker/overlay2/86ef1d4759376f1edb9cf43004ae59ba61200d79acea357b872de5247071531a/diff:/var/lib/d
ocker/overlay2/5c948a7514e051c2a3b4600944d475a674404c1f58b1bd7fd4d7b764be35c3e7/diff:/var/lib/docker/overlay2/71b25d81823337d776e0202f623b2f822a2ebba0e8f935e3211da145bbdac22d/diff:/var/lib/docker/overlay2/4261f26f8c837e7f8f07ba0cd2aca7274aa96561ac0bd104af28d44c15678f8d/diff:/var/lib/docker/overlay2/f2a71b21156c3ad367bb9fe8bde3e75df6681f9e505d34d8e1b635e50aa0e29b/diff:/var/lib/docker/overlay2/9ef1fde443b60bf4cda7ef2a51adcacef4383280f30eb6ee14e23959a5395e18/diff:/var/lib/docker/overlay2/aa727d5a432452b3e1ed2fc994db0bf2d5a47af89bc768a1e2b633baa8319025/diff:/var/lib/docker/overlay2/dc9499b738334b5626de1363086d853c4e6c2c5b2d51f58e8e1ed8346cc39f45/diff:/var/lib/docker/overlay2/03cc962675627b454b9857b5ffcac439143653d35cd42ad886571fbecd955c6c/diff:/var/lib/docker/overlay2/32064543ecb8bae609fda6b220233d7b73f4e601bd320e1d389451fb120c2459/diff:/var/lib/docker/overlay2/a0c00493cc3fdbdaa87f61faf83c8fcedb9ee4d5b24ecc35eceba0429b6dcc88/diff:/var/lib/docker/overlay2/18cbe77f8dab8144889e1deb46a8795fd207afbdfd9ea3fdeb526b72d7e
4a4cb/diff:/var/lib/docker/overlay2/dca4f5d0bca28bc75fa764afd39d07871f051c2d362fc2850ca957b975f75555/diff:/var/lib/docker/overlay2/cd071d8f661b9ff9c6d861fce06842188fda3d307062488b93944a97883f4a67/diff:/var/lib/docker/overlay2/5a0c166a1e11d0bb9f7de4a882be960f70ac972e91b3ab8fd83fc6eec2bf1d5e/diff:/var/lib/docker/overlay2/43eb3ae44d3bf3e6cbf5a66accacb1247ef210213c56a67097949d06794f4bbc/diff:/var/lib/docker/overlay2/b60cb5b525f4aed6b27cf2e451596e134d1185254f3798d5a91357814552e273/diff:/var/lib/docker/overlay2/e5c8c85e78fedb1981436f368ad70bdfe38d240dddceac3d103e1d09bc252cef/diff:/var/lib/docker/overlay2/7bf66ab7d17eda3c6760fe43588afa33bb56279f5a7202de9cc98b4711f0da5b/diff:/var/lib/docker/overlay2/d5dbd6e8a20b8681e7b55b57c382c8c9029da8496cc9e0f4d1eb3b224583535e/diff:/var/lib/docker/overlay2/de3ca9b514aa5071c0fd39d0d80fa080bce6e82e872747ec916ef7bf379c0c2a/diff:/var/lib/docker/overlay2/80bff03df7f2a6639d98a2c99d1cc55a36dc9110501452bfd7bce3bce496541e/diff:/var/lib/docker/overlay2/471bea5a4732f210c3fcf87a02643b615a96f6
39d01ac2c3e71f7886b23733a5/diff:/var/lib/docker/overlay2/29dd84326e3ef52c85008a60a7ff2225fd18f1d51ec3e60f143a0d93e442469d/diff:/var/lib/docker/overlay2/05d035eba3e40a396836d9917064869afd7ec70920f526f8903dc8f5a9f2dce3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-211859",
	                "Source": "/var/lib/docker/volumes/no-preload-211859/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-211859",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-211859",
	                "name.minikube.sigs.k8s.io": "no-preload-211859",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6412d705758b0fa3708816e7c5f6b0b6bfa26c10bbbc6e3acea6f602d9c2dab3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33017"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33016"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33013"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33015"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33014"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6412d705758b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-211859": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "23cabd631389",
	                        "no-preload-211859"
	                    ],
	                    "NetworkID": "f6ac14d41355072c0829af36f4aed661fe422e2af93237ea348f6b100ade02e6",
	                    "EndpointID": "2f14131c7e47074512e155979b67d1e3a5303bb55db398f44880c21804eebda9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-211859 -n no-preload-211859
E0108 21:23:39.250278   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-211859 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-210943                                   | pause-210943                 | jenkins | v1.28.0 | 08 Jan 23 21:11 UTC | 08 Jan 23 21:11 UTC |
	| start   | -p cilium-210619 --memory=2048                    | cilium-210619                | jenkins | v1.28.0 | 08 Jan 23 21:11 UTC | 08 Jan 23 21:12 UTC |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                              |         |         |                     |                     |
	|         | --cni=cilium --driver=docker                      |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-210725                         | cert-expiration-210725       | jenkins | v1.28.0 | 08 Jan 23 21:11 UTC | 08 Jan 23 21:11 UTC |
	| start   | -p calico-210619 --memory=2048                    | calico-210619                | jenkins | v1.28.0 | 08 Jan 23 21:11 UTC |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                              |         |         |                     |                     |
	|         | --cni=calico --driver=docker                      |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	| ssh     | -p kindnet-210619 pgrep -a                        | kindnet-210619               | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	|         | kubelet                                           |                              |         |         |                     |                     |
	| delete  | -p kindnet-210619                                 | kindnet-210619               | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	| start   | -p enable-default-cni-210619                      | enable-default-cni-210619    | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	|         | --memory=2048                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                              |         |         |                     |                     |
	|         | --enable-default-cni=true                         |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	| ssh     | -p cilium-210619 pgrep -a                         | cilium-210619                | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	|         | kubelet                                           |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-210619                      | enable-default-cni-210619    | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	|         | pgrep -a kubelet                                  |                              |         |         |                     |                     |
	| delete  | -p cilium-210619                                  | cilium-210619                | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	| start   | -p bridge-210619 --memory=2048                    | bridge-210619                | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:13 UTC |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                              |         |         |                     |                     |
	|         | --cni=bridge --driver=docker                      |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	| ssh     | -p bridge-210619 pgrep -a                         | bridge-210619                | jenkins | v1.28.0 | 08 Jan 23 21:13 UTC | 08 Jan 23 21:13 UTC |
	|         | kubelet                                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-210902                      | kubernetes-upgrade-210902    | jenkins | v1.28.0 | 08 Jan 23 21:18 UTC | 08 Jan 23 21:18 UTC |
	| start   | -p old-k8s-version-211828                         | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:18 UTC |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --kvm-network=default                             |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                              |         |         |                     |                     |
	|         | --keep-context=false                              |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-210619                      | enable-default-cni-210619    | jenkins | v1.28.0 | 08 Jan 23 21:18 UTC | 08 Jan 23 21:18 UTC |
	| start   | -p no-preload-211859                              | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:18 UTC |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| delete  | -p bridge-210619                                  | bridge-210619                | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:19 UTC |
	| delete  | -p calico-210619                                  | calico-210619                | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:19 UTC |
	| start   | -p embed-certs-211950                             | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:20 UTC |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-211952 | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:19 UTC |
	|         | disable-driver-mounts-211952                      |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC |                     |
	|         | default-k8s-diff-port-211952                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-211950       | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:20 UTC | 08 Jan 23 21:20 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p embed-certs-211950                             | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:20 UTC | 08 Jan 23 21:21 UTC |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-211950            | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:21 UTC | 08 Jan 23 21:21 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-211950                             | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:21 UTC |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 21:21:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:21:05.802454  252113 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:21:05.802654  252113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:21:05.802662  252113 out.go:309] Setting ErrFile to fd 2...
	I0108 21:21:05.802669  252113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:21:05.802789  252113 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:21:05.803305  252113 out.go:303] Setting JSON to false
	I0108 21:21:05.804864  252113 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3815,"bootTime":1673209051,"procs":557,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:21:05.804927  252113 start.go:135] virtualization: kvm guest
	I0108 21:21:05.807547  252113 out.go:177] * [embed-certs-211950] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:21:05.809328  252113 notify.go:220] Checking for updates...
	I0108 21:21:05.809354  252113 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:21:05.811105  252113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:21:05.812689  252113 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:21:05.814326  252113 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:21:05.815772  252113 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:21:05.817523  252113 config.go:180] Loaded profile config "embed-certs-211950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:21:05.817884  252113 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:21:05.850227  252113 docker.go:137] docker version: linux-20.10.22
	I0108 21:21:05.850311  252113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:21:05.950357  252113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2023-01-08 21:21:05.870996193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:21:05.950457  252113 docker.go:254] overlay module found
	I0108 21:21:05.952625  252113 out.go:177] * Using the docker driver based on existing profile
	I0108 21:21:05.953952  252113 start.go:294] selected driver: docker
	I0108 21:21:05.953965  252113 start.go:838] validating driver "docker" against &{Name:embed-certs-211950 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-211950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:21:05.954060  252113 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:21:05.954880  252113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:21:06.055295  252113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2023-01-08 21:21:05.976172276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:21:06.055595  252113 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:21:06.055620  252113 cni.go:95] Creating CNI manager for ""
	I0108 21:21:06.055628  252113 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:21:06.055645  252113 start_flags.go:317] config:
	{Name:embed-certs-211950 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-211950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:21:06.057844  252113 out.go:177] * Starting control plane node embed-certs-211950 in cluster embed-certs-211950
	I0108 21:21:06.059242  252113 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:21:06.060605  252113 out.go:177] * Pulling base image ...
	I0108 21:21:06.061894  252113 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:21:06.061922  252113 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:21:06.061940  252113 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0108 21:21:06.061952  252113 cache.go:57] Caching tarball of preloaded images
	I0108 21:21:06.062182  252113 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:21:06.062204  252113 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0108 21:21:06.062345  252113 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/config.json ...
	I0108 21:21:06.088100  252113 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:21:06.088123  252113 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:21:06.088154  252113 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:21:06.088191  252113 start.go:364] acquiring machines lock for embed-certs-211950: {Name:mk0bdd56e7ab57c1368c3e82ee515d1652a3526b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:21:06.088291  252113 start.go:368] acquired machines lock for "embed-certs-211950" in 77.123µs
	I0108 21:21:06.088316  252113 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:21:06.088321  252113 fix.go:55] fixHost starting: 
	I0108 21:21:06.088519  252113 cli_runner.go:164] Run: docker container inspect embed-certs-211950 --format={{.State.Status}}
	I0108 21:21:06.113908  252113 fix.go:103] recreateIfNeeded on embed-certs-211950: state=Stopped err=<nil>
	W0108 21:21:06.113938  252113 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:21:06.116212  252113 out.go:177] * Restarting existing docker container for "embed-certs-211950" ...
	I0108 21:21:04.447959  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:06.449102  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:05.102742  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:07.602250  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:06.312183  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:08.810045  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:06.117745  252113 cli_runner.go:164] Run: docker start embed-certs-211950
	I0108 21:21:06.475305  252113 cli_runner.go:164] Run: docker container inspect embed-certs-211950 --format={{.State.Status}}
	I0108 21:21:06.503725  252113 kic.go:415] container "embed-certs-211950" state is running.
	I0108 21:21:06.504129  252113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-211950
	I0108 21:21:06.530036  252113 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/config.json ...
	I0108 21:21:06.530275  252113 machine.go:88] provisioning docker machine ...
	I0108 21:21:06.530298  252113 ubuntu.go:169] provisioning hostname "embed-certs-211950"
	I0108 21:21:06.530340  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:06.559072  252113 main.go:134] libmachine: Using SSH client type: native
	I0108 21:21:06.559258  252113 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33032 <nil> <nil>}
	I0108 21:21:06.559273  252113 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-211950 && echo "embed-certs-211950" | sudo tee /etc/hostname
	I0108 21:21:06.559914  252113 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56878->127.0.0.1:33032: read: connection reset by peer
	I0108 21:21:09.684380  252113 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-211950
	
	I0108 21:21:09.684467  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:09.708681  252113 main.go:134] libmachine: Using SSH client type: native
	I0108 21:21:09.708844  252113 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33032 <nil> <nil>}
	I0108 21:21:09.708871  252113 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-211950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-211950/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-211950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:21:09.827124  252113 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:21:09.827161  252113 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:21:09.827192  252113 ubuntu.go:177] setting up certificates
	I0108 21:21:09.827204  252113 provision.go:83] configureAuth start
	I0108 21:21:09.827263  252113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-211950
	I0108 21:21:09.852817  252113 provision.go:138] copyHostCerts
	I0108 21:21:09.852880  252113 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:21:09.852893  252113 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:21:09.852963  252113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:21:09.853060  252113 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:21:09.853069  252113 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:21:09.853093  252113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:21:09.853148  252113 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:21:09.853158  252113 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:21:09.853182  252113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:21:09.853235  252113 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.embed-certs-211950 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-211950]
	I0108 21:21:09.920653  252113 provision.go:172] copyRemoteCerts
	I0108 21:21:09.920714  252113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:21:09.920750  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:09.947707  252113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33032 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/embed-certs-211950/id_rsa Username:docker}
	I0108 21:21:10.030903  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:21:10.048184  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 21:21:10.065587  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:21:10.083096  252113 provision.go:86] duration metric: configureAuth took 255.875528ms
	I0108 21:21:10.083135  252113 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:21:10.083333  252113 config.go:180] Loaded profile config "embed-certs-211950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:21:10.083347  252113 machine.go:91] provisioned docker machine in 3.553058016s
	I0108 21:21:10.083354  252113 start.go:300] post-start starting for "embed-certs-211950" (driver="docker")
	I0108 21:21:10.083362  252113 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:21:10.083415  252113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:21:10.083452  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:10.109702  252113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33032 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/embed-certs-211950/id_rsa Username:docker}
	I0108 21:21:10.195016  252113 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:21:10.197818  252113 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:21:10.197840  252113 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:21:10.197851  252113 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:21:10.197857  252113 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:21:10.197865  252113 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:21:10.197912  252113 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:21:10.197977  252113 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:21:10.198052  252113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:21:10.204746  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:21:10.222465  252113 start.go:303] post-start completed in 139.096583ms
	I0108 21:21:10.222528  252113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:21:10.222583  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:10.248489  252113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33032 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/embed-certs-211950/id_rsa Username:docker}
	I0108 21:21:10.332052  252113 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:21:10.335995  252113 fix.go:57] fixHost completed within 4.247669326s
	I0108 21:21:10.336018  252113 start.go:83] releasing machines lock for "embed-certs-211950", held for 4.247709743s
	I0108 21:21:10.336091  252113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-211950
	I0108 21:21:10.362577  252113 ssh_runner.go:195] Run: cat /version.json
	I0108 21:21:10.362643  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:10.362655  252113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:21:10.362722  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:10.389523  252113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33032 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/embed-certs-211950/id_rsa Username:docker}
	I0108 21:21:10.390135  252113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33032 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/embed-certs-211950/id_rsa Username:docker}
	I0108 21:21:10.474910  252113 ssh_runner.go:195] Run: systemctl --version
	I0108 21:21:10.503672  252113 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:21:10.515405  252113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:21:10.525294  252113 docker.go:189] disabling docker service ...
	I0108 21:21:10.525338  252113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:21:10.535021  252113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:21:10.543823  252113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:21:10.626580  252113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:21:10.703580  252113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:21:10.712815  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:21:10.725307  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:21:10.733204  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:21:10.742003  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:21:10.749989  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 21:21:10.757996  252113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:21:10.764350  252113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:21:10.770752  252113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:21:08.948177  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:10.948447  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:09.602390  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:12.102789  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:10.810085  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:13.309701  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:10.843690  252113 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:21:10.910489  252113 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:21:10.910563  252113 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:21:10.914322  252113 start.go:472] Will wait 60s for crictl version
	I0108 21:21:10.914382  252113 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:21:10.939459  252113 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:21:10Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 21:21:13.448462  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:15.448772  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:17.948651  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:14.602345  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:17.102934  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:15.810341  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:17.811099  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:21.986836  252113 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:21:22.009302  252113 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:21:22.009370  252113 ssh_runner.go:195] Run: containerd --version
	I0108 21:21:22.032318  252113 ssh_runner.go:195] Run: containerd --version
	I0108 21:21:22.057723  252113 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:21:20.448682  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:22.948378  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:19.602327  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:21.602862  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:20.309592  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:22.309823  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:22.059129  252113 cli_runner.go:164] Run: docker network inspect embed-certs-211950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:21:22.082721  252113 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0108 21:21:22.086120  252113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:21:22.095607  252113 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:21:22.095676  252113 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:21:22.119292  252113 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:21:22.119314  252113 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:21:22.119353  252113 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:21:22.144549  252113 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:21:22.144574  252113 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:21:22.144617  252113 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:21:22.169507  252113 cni.go:95] Creating CNI manager for ""
	I0108 21:21:22.169531  252113 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:21:22.169546  252113 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:21:22.169563  252113 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-211950 NodeName:embed-certs-211950 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:21:22.169743  252113 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-211950"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:21:22.169858  252113 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-211950 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:embed-certs-211950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:21:22.169918  252113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:21:22.177488  252113 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:21:22.177552  252113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:21:22.184516  252113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (511 bytes)
	I0108 21:21:22.197565  252113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:21:22.210079  252113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2047 bytes)
	I0108 21:21:22.222327  252113 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:21:22.225285  252113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:21:22.234190  252113 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950 for IP: 192.168.94.2
	I0108 21:21:22.234285  252113 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:21:22.234322  252113 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:21:22.234389  252113 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/client.key
	I0108 21:21:22.234443  252113 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/apiserver.key.ad8e880a
	I0108 21:21:22.234517  252113 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/proxy-client.key
	I0108 21:21:22.234619  252113 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:21:22.234647  252113 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:21:22.234656  252113 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:21:22.234690  252113 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:21:22.234715  252113 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:21:22.234739  252113 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:21:22.234776  252113 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:21:22.235406  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:21:22.252804  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 21:21:22.269489  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:21:22.286176  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 21:21:22.302881  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:21:22.319924  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:21:22.336527  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:21:22.353096  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:21:22.369684  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:21:22.386382  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:21:22.403589  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:21:22.422540  252113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:21:22.434954  252113 ssh_runner.go:195] Run: openssl version
	I0108 21:21:22.439875  252113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:21:22.447293  252113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:21:22.450515  252113 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:21:22.450562  252113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:21:22.455232  252113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:21:22.461900  252113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:21:22.469022  252113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:21:22.471993  252113 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:21:22.472043  252113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:21:22.476628  252113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:21:22.483089  252113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:21:22.490167  252113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:21:22.493388  252113 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:21:22.493425  252113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:21:22.498191  252113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:21:22.505075  252113 kubeadm.go:396] StartCluster: {Name:embed-certs-211950 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-211950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:21:22.505169  252113 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:21:22.505219  252113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:21:22.530247  252113 cri.go:87] found id: "89a8de6f521f8243c799cb716667457963c6e97b5ba6b48214976b5969e46eb3"
	I0108 21:21:22.530269  252113 cri.go:87] found id: "d147c154d2b1bba1e7914547754b114d509b8f036c6ab17cc46cd16f2bb67804"
	I0108 21:21:22.530276  252113 cri.go:87] found id: "8c4edc81cee83db5f851592ab6e35f35d1a3dcbc676e2621c025ccd2e6d361f1"
	I0108 21:21:22.530282  252113 cri.go:87] found id: "deadf4ad2cb0b9ea2eccba64bfd495d97f73bb63315a5039a69dfa9bd91b9557"
	I0108 21:21:22.530288  252113 cri.go:87] found id: "96646d39dfe73748d7e64070179768bcb8d8dfeb8292891cf42b4e0f8e39ac8f"
	I0108 21:21:22.530294  252113 cri.go:87] found id: "0f13fbba981df0d0b39c780e1ad6e510287e450ece0fdc730f960c6dba03815b"
	I0108 21:21:22.530300  252113 cri.go:87] found id: "661098908290ee1fa99389ecb59d2a0cd00cb6464e959f754d124c4173502b64"
	I0108 21:21:22.530305  252113 cri.go:87] found id: "a7959ec8c708edd93b01c594af64ca292a111e1d721e2c8446d6efc28bba653a"
	I0108 21:21:22.530311  252113 cri.go:87] found id: ""
	I0108 21:21:22.530349  252113 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:21:22.542527  252113 cri.go:114] JSON = null
	W0108 21:21:22.542587  252113 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0108 21:21:22.542631  252113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:21:22.550243  252113 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:21:22.550264  252113 kubeadm.go:627] restartCluster start
	I0108 21:21:22.550299  252113 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:21:22.557319  252113 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:22.558314  252113 kubeconfig.go:135] verify returned: extract IP: "embed-certs-211950" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:21:22.558783  252113 kubeconfig.go:146] "embed-certs-211950" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:21:22.559413  252113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:21:22.560901  252113 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:21:22.567580  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:22.567625  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:22.575328  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:22.775525  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:22.775607  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:22.784331  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:22.975569  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:22.975661  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:22.984395  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:23.175545  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:23.175618  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:23.184269  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:23.375514  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:23.375606  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:23.384151  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:23.576476  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:23.576564  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:23.585154  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:23.776477  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:23.776559  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:23.785115  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:23.976398  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:23.976477  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:23.985629  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:24.175955  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:24.176027  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:24.185012  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:24.376357  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:24.376419  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:24.385370  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:24.575561  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:24.575652  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:24.584295  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:24.775523  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:24.775587  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:24.783989  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:24.976277  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:24.976357  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:24.984953  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.176244  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:25.176331  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:25.184911  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.376222  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:25.376301  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:25.385465  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.575785  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:25.575879  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:25.584484  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.584506  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:25.584548  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:25.592781  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.592805  252113 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0108 21:21:25.592811  252113 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:21:25.592822  252113 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:21:25.592860  252113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:21:25.618121  252113 cri.go:87] found id: "89a8de6f521f8243c799cb716667457963c6e97b5ba6b48214976b5969e46eb3"
	I0108 21:21:25.618143  252113 cri.go:87] found id: "d147c154d2b1bba1e7914547754b114d509b8f036c6ab17cc46cd16f2bb67804"
	I0108 21:21:25.618150  252113 cri.go:87] found id: "8c4edc81cee83db5f851592ab6e35f35d1a3dcbc676e2621c025ccd2e6d361f1"
	I0108 21:21:25.618156  252113 cri.go:87] found id: "deadf4ad2cb0b9ea2eccba64bfd495d97f73bb63315a5039a69dfa9bd91b9557"
	I0108 21:21:25.618162  252113 cri.go:87] found id: "96646d39dfe73748d7e64070179768bcb8d8dfeb8292891cf42b4e0f8e39ac8f"
	I0108 21:21:25.618168  252113 cri.go:87] found id: "0f13fbba981df0d0b39c780e1ad6e510287e450ece0fdc730f960c6dba03815b"
	I0108 21:21:25.618174  252113 cri.go:87] found id: "661098908290ee1fa99389ecb59d2a0cd00cb6464e959f754d124c4173502b64"
	I0108 21:21:25.618180  252113 cri.go:87] found id: "a7959ec8c708edd93b01c594af64ca292a111e1d721e2c8446d6efc28bba653a"
	I0108 21:21:25.618186  252113 cri.go:87] found id: ""
	I0108 21:21:25.618194  252113 cri.go:232] Stopping containers: [89a8de6f521f8243c799cb716667457963c6e97b5ba6b48214976b5969e46eb3 d147c154d2b1bba1e7914547754b114d509b8f036c6ab17cc46cd16f2bb67804 8c4edc81cee83db5f851592ab6e35f35d1a3dcbc676e2621c025ccd2e6d361f1 deadf4ad2cb0b9ea2eccba64bfd495d97f73bb63315a5039a69dfa9bd91b9557 96646d39dfe73748d7e64070179768bcb8d8dfeb8292891cf42b4e0f8e39ac8f 0f13fbba981df0d0b39c780e1ad6e510287e450ece0fdc730f960c6dba03815b 661098908290ee1fa99389ecb59d2a0cd00cb6464e959f754d124c4173502b64 a7959ec8c708edd93b01c594af64ca292a111e1d721e2c8446d6efc28bba653a]
	I0108 21:21:25.618232  252113 ssh_runner.go:195] Run: which crictl
	I0108 21:21:25.621048  252113 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 89a8de6f521f8243c799cb716667457963c6e97b5ba6b48214976b5969e46eb3 d147c154d2b1bba1e7914547754b114d509b8f036c6ab17cc46cd16f2bb67804 8c4edc81cee83db5f851592ab6e35f35d1a3dcbc676e2621c025ccd2e6d361f1 deadf4ad2cb0b9ea2eccba64bfd495d97f73bb63315a5039a69dfa9bd91b9557 96646d39dfe73748d7e64070179768bcb8d8dfeb8292891cf42b4e0f8e39ac8f 0f13fbba981df0d0b39c780e1ad6e510287e450ece0fdc730f960c6dba03815b 661098908290ee1fa99389ecb59d2a0cd00cb6464e959f754d124c4173502b64 a7959ec8c708edd93b01c594af64ca292a111e1d721e2c8446d6efc28bba653a
	I0108 21:21:25.647817  252113 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:21:25.657541  252113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:21:25.664561  252113 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan  8 21:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan  8 21:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Jan  8 21:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan  8 21:20 /etc/kubernetes/scheduler.conf
	
	I0108 21:21:25.664619  252113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 21:21:25.671011  252113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 21:21:25.677375  252113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 21:21:25.683797  252113 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.683846  252113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 21:21:25.689922  252113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 21:21:25.696159  252113 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.696204  252113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 21:21:25.702527  252113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:21:25.708916  252113 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:21:25.708938  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:21:25.752274  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:21:25.448237  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:27.948001  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:24.102717  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:26.602573  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:24.809637  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:26.810185  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:28.810565  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:26.771186  252113 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.018881983s)
	I0108 21:21:26.771221  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:21:26.910605  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:21:26.962648  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:21:27.049416  252113 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:21:27.049533  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:21:27.614351  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:21:28.113890  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:21:28.125589  252113 api_server.go:71] duration metric: took 1.076175741s to wait for apiserver process to appear ...
	I0108 21:21:28.125678  252113 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:21:28.125706  252113 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:21:28.126079  252113 api_server.go:268] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0108 21:21:28.626473  252113 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:21:29.948452  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:31.948574  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:31.619403  252113 api_server.go:278] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0108 21:21:31.619437  252113 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0108 21:21:31.626775  252113 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:21:31.712269  252113 api_server.go:278] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:21:31.712321  252113 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:21:32.126802  252113 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:21:32.131550  252113 api_server.go:278] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:21:32.131592  252113 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:21:32.627202  252113 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:21:32.632820  252113 api_server.go:278] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:21:32.632854  252113 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:21:33.126355  252113 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:21:33.132259  252113 api_server.go:278] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0108 21:21:33.140648  252113 api_server.go:140] control plane version: v1.25.3
	I0108 21:21:33.140683  252113 api_server.go:130] duration metric: took 5.014986172s to wait for apiserver health ...
	I0108 21:21:33.140697  252113 cni.go:95] Creating CNI manager for ""
	I0108 21:21:33.140707  252113 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:21:33.143250  252113 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:21:29.102196  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:31.102881  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:31.310002  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:33.809947  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:33.145039  252113 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:21:33.149495  252113 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:21:33.149517  252113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:21:33.165823  252113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:21:34.423055  252113 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.257190006s)
	I0108 21:21:34.423131  252113 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:21:34.431806  252113 system_pods.go:59] 9 kube-system pods found
	I0108 21:21:34.431843  252113 system_pods.go:61] "coredns-565d847f94-phg9v" [2a976fdd-21b3-4dee-a33c-ccd2c57d8be9] Running
	I0108 21:21:34.431856  252113 system_pods.go:61] "etcd-embed-certs-211950" [4971d596-11e2-4364-a509-52a06bf77e09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:21:34.431864  252113 system_pods.go:61] "kindnet-26wwc" [02f0fed5-e625-4740-aa5e-d77817ca124b] Running
	I0108 21:21:34.431884  252113 system_pods.go:61] "kube-apiserver-embed-certs-211950" [ba0d2dbe-2dbb-4a40-b2cc-da82f163d7f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 21:21:34.431900  252113 system_pods.go:61] "kube-controller-manager-embed-certs-211950" [0877b5ea-137d-4d80-a5d2-fd95544ba3bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:21:34.431916  252113 system_pods.go:61] "kube-proxy-ggxgh" [1bd15143-26d2-4a26-a52e-362676c5397b] Running
	I0108 21:21:34.431928  252113 system_pods.go:61] "kube-scheduler-embed-certs-211950" [41b9dede-c0fb-4644-8fa3-51d3eccd950b] Running
	I0108 21:21:34.431942  252113 system_pods.go:61] "metrics-server-5c8fd5cf8-szzjr" [488ef49e-82e4-443b-8f03-3726c44719af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:21:34.431954  252113 system_pods.go:61] "storage-provisioner" [024b335d-c262-457c-8773-924e20b66407] Running
	I0108 21:21:34.431962  252113 system_pods.go:74] duration metric: took 8.820242ms to wait for pod list to return data ...
	I0108 21:21:34.431976  252113 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:21:34.436028  252113 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:21:34.436084  252113 node_conditions.go:123] node cpu capacity is 8
	I0108 21:21:34.436102  252113 node_conditions.go:105] duration metric: took 4.121302ms to run NodePressure ...
	I0108 21:21:34.436123  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:21:34.580079  252113 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 21:21:34.583873  252113 kubeadm.go:778] kubelet initialised
	I0108 21:21:34.583892  252113 kubeadm.go:779] duration metric: took 3.792429ms waiting for restarted kubelet to initialise ...
	I0108 21:21:34.583900  252113 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:21:34.589069  252113 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-phg9v" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:34.593053  252113 pod_ready.go:92] pod "coredns-565d847f94-phg9v" in "kube-system" namespace has status "Ready":"True"
	I0108 21:21:34.593070  252113 pod_ready.go:81] duration metric: took 3.977273ms waiting for pod "coredns-565d847f94-phg9v" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:34.593079  252113 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:34.448121  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:36.947638  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:33.602189  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:35.602598  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:38.102223  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:35.811248  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:38.309824  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:36.603328  252113 pod_ready.go:102] pod "etcd-embed-certs-211950" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:39.102636  252113 pod_ready.go:102] pod "etcd-embed-certs-211950" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:39.448188  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:41.448816  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:40.602552  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:43.103011  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:40.310117  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:42.310275  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:41.102721  252113 pod_ready.go:92] pod "etcd-embed-certs-211950" in "kube-system" namespace has status "Ready":"True"
	I0108 21:21:41.102749  252113 pod_ready.go:81] duration metric: took 6.509663521s waiting for pod "etcd-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:41.102765  252113 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:41.107086  252113 pod_ready.go:92] pod "kube-apiserver-embed-certs-211950" in "kube-system" namespace has status "Ready":"True"
	I0108 21:21:41.107102  252113 pod_ready.go:81] duration metric: took 4.330679ms waiting for pod "kube-apiserver-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:41.107110  252113 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:43.117124  252113 pod_ready.go:102] pod "kube-controller-manager-embed-certs-211950" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:45.616162  252113 pod_ready.go:102] pod "kube-controller-manager-embed-certs-211950" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:43.947639  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:45.948111  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:47.948466  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:45.603311  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:48.102009  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:44.809516  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:46.809649  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:48.810315  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:46.116423  252113 pod_ready.go:92] pod "kube-controller-manager-embed-certs-211950" in "kube-system" namespace has status "Ready":"True"
	I0108 21:21:46.116450  252113 pod_ready.go:81] duration metric: took 5.00933349s waiting for pod "kube-controller-manager-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:46.116461  252113 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ggxgh" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:46.120605  252113 pod_ready.go:92] pod "kube-proxy-ggxgh" in "kube-system" namespace has status "Ready":"True"
	I0108 21:21:46.120624  252113 pod_ready.go:81] duration metric: took 4.157414ms waiting for pod "kube-proxy-ggxgh" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:46.120633  252113 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:47.630047  252113 pod_ready.go:92] pod "kube-scheduler-embed-certs-211950" in "kube-system" namespace has status "Ready":"True"
	I0108 21:21:47.630074  252113 pod_ready.go:81] duration metric: took 1.509435424s waiting for pod "kube-scheduler-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:47.630084  252113 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:49.639550  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:50.447460  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:52.448611  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:50.102594  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:52.601892  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:51.309943  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:53.809699  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:52.139845  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:54.639170  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:54.947665  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:56.947700  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:54.602771  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:57.101897  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:56.310210  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:58.809748  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:57.141151  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:59.639435  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:58.949756  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:01.448451  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:59.101923  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:01.101962  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:03.102909  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:00.810593  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:03.310194  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:02.139593  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:04.639219  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:03.947604  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:05.948211  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:05.602550  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:07.602641  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:05.809750  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:07.809939  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:06.639683  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:09.139384  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:08.447451  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:10.448497  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:12.948218  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:10.102194  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:12.102500  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:10.309596  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:12.309705  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:11.140038  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:13.639053  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:15.639818  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:15.449977  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:17.947675  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:14.102962  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:16.602092  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:14.310432  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:16.810413  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:18.139157  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:20.139713  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:19.947707  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:21.948479  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:19.102762  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:21.602905  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:19.309811  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:21.309972  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:23.810232  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:22.140404  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:24.142445  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:24.447621  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:26.448004  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:24.102645  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:26.602186  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:25.810410  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:28.310174  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:26.639220  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:28.640090  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:28.947732  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:31.448269  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:28.602252  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:31.102800  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:30.310481  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:32.311174  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:31.139684  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:33.140111  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:35.639008  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:33.948439  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:36.448349  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:33.602137  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:36.101829  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:38.102615  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:34.810711  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:37.310466  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:37.639384  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:39.639644  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:38.948577  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:41.447813  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:40.102951  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:42.104260  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:39.810530  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:42.309510  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:42.141404  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:44.639565  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:43.448406  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:45.947675  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:47.948164  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:44.602836  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:47.102043  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:44.310568  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:46.809625  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:48.809973  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:46.640262  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:49.139383  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:50.448245  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:52.948450  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:49.102979  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:51.601953  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:50.810329  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:53.310062  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:51.639284  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:54.139362  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:55.447735  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:57.448306  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:53.602267  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:55.602823  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:58.101977  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:55.810600  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:58.310283  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:56.139671  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:58.639895  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:59.448595  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:01.448628  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:00.102056  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:23:02.602631  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:23:00.310562  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:02.810458  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:01.139847  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:03.140497  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:05.639659  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:05.102809  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:23:06.104150  234278 node_ready.go:38] duration metric: took 4m0.01108953s waiting for node "old-k8s-version-211828" to be "Ready" ...
	I0108 21:23:06.106363  234278 out.go:177] 
	W0108 21:23:06.107917  234278 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:23:06.107934  234278 out.go:239] * 
	W0108 21:23:06.108813  234278 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:23:06.110698  234278 out.go:177] 
	I0108 21:23:03.948469  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:06.448911  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:05.310071  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:07.310184  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:08.140432  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:10.639318  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:08.947910  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:11.447912  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:09.810127  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:12.310383  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:12.640406  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:15.138954  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:13.448469  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:15.947494  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:17.948406  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:14.310499  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:16.809792  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:18.810138  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:17.141830  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:19.639941  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:20.447825  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:22.448183  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:21.309926  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:23.310592  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:22.139220  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:24.139989  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:24.448405  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:26.948163  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:25.810236  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:27.810585  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:26.639866  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:29.140483  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:29.447764  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:31.448133  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:30.309806  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:32.309936  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:31.140935  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:33.639522  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:33.448453  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:35.947611  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:37.947842  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:34.809756  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:37.310339  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:38.812410  238176 node_ready.go:38] duration metric: took 4m0.228660027s waiting for node "no-preload-211859" to be "Ready" ...
	I0108 21:23:38.814872  238176 out.go:177] 
	W0108 21:23:38.817068  238176 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:23:38.817087  238176 out.go:239] * 
	W0108 21:23:38.817914  238176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:23:38.820219  238176 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	3b86738431af4       d6e3e26021b60       About a minute ago   Running             kindnet-cni               1                   de969308cd0da
	01444440cdfa7       d6e3e26021b60       3 minutes ago        Exited              kindnet-cni               0                   de969308cd0da
	640b6f75f7dac       beaaf00edd38a       4 minutes ago        Running             kube-proxy                0                   40574ad3062a4
	7b61203838e94       6d23ec0e8b87e       4 minutes ago        Running             kube-scheduler            0                   62c93aab5d432
	e5292d3c9357a       0346dbd74bcb9       4 minutes ago        Running             kube-apiserver            0                   569500f001a7b
	4777a2f6ea154       a8a176a5d5d69       4 minutes ago        Running             etcd                      0                   40770a6daff3e
	1c6e8899fc497       6039992312758       4 minutes ago        Running             kube-controller-manager   0                   594a31cb2e0e2
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sun 2023-01-08 21:19:01 UTC, end at Sun 2023-01-08 21:23:39 UTC. --
	Jan 08 21:19:39 no-preload-211859 containerd[513]: time="2023-01-08T21:19:39.048699494Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de969308cd0da9bfec2cf38136673604413fd525fb7e1e2091093cb72e00e62d pid=2242 runtime=io.containerd.runc.v2
	Jan 08 21:19:39 no-preload-211859 containerd[513]: time="2023-01-08T21:19:39.131150483Z" level=info msg="CreateContainer within sandbox \"40574ad3062a4261969544202438e1e5a6c0817ca88b67e7877d3cbe4e719683\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6\""
	Jan 08 21:19:39 no-preload-211859 containerd[513]: time="2023-01-08T21:19:39.132103412Z" level=info msg="StartContainer for \"640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6\""
	Jan 08 21:19:39 no-preload-211859 containerd[513]: time="2023-01-08T21:19:39.251429533Z" level=info msg="StartContainer for \"640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6\" returns successfully"
	Jan 08 21:19:39 no-preload-211859 containerd[513]: time="2023-01-08T21:19:39.511880424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-vh4hl,Uid:c002c329-15ad-4066-8f90-bee3d9d18431,Namespace:kube-system,Attempt:0,} returns sandbox id \"de969308cd0da9bfec2cf38136673604413fd525fb7e1e2091093cb72e00e62d\""
	Jan 08 21:19:39 no-preload-211859 containerd[513]: time="2023-01-08T21:19:39.513927019Z" level=info msg="PullImage \"kindest/kindnetd:v20221004-44d545d1\""
	Jan 08 21:19:39 no-preload-211859 containerd[513]: time="2023-01-08T21:19:39.516021297Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jan 08 21:19:40 no-preload-211859 containerd[513]: time="2023-01-08T21:19:40.200498292Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jan 08 21:19:41 no-preload-211859 containerd[513]: time="2023-01-08T21:19:41.878233054Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/kindest/kindnetd:v20221004-44d545d1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jan 08 21:19:41 no-preload-211859 containerd[513]: time="2023-01-08T21:19:41.880695293Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jan 08 21:19:41 no-preload-211859 containerd[513]: time="2023-01-08T21:19:41.883078403Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/kindest/kindnetd:v20221004-44d545d1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jan 08 21:19:41 no-preload-211859 containerd[513]: time="2023-01-08T21:19:41.885277279Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jan 08 21:19:41 no-preload-211859 containerd[513]: time="2023-01-08T21:19:41.885695315Z" level=info msg="PullImage \"kindest/kindnetd:v20221004-44d545d1\" returns image reference \"sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f\""
	Jan 08 21:19:41 no-preload-211859 containerd[513]: time="2023-01-08T21:19:41.887664864Z" level=info msg="CreateContainer within sandbox \"de969308cd0da9bfec2cf38136673604413fd525fb7e1e2091093cb72e00e62d\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Jan 08 21:19:41 no-preload-211859 containerd[513]: time="2023-01-08T21:19:41.902007301Z" level=info msg="CreateContainer within sandbox \"de969308cd0da9bfec2cf38136673604413fd525fb7e1e2091093cb72e00e62d\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"01444440cdfa75043bc853a535682e21b80b51a68a4c83045397f1599c379c38\""
	Jan 08 21:19:41 no-preload-211859 containerd[513]: time="2023-01-08T21:19:41.902572314Z" level=info msg="StartContainer for \"01444440cdfa75043bc853a535682e21b80b51a68a4c83045397f1599c379c38\""
	Jan 08 21:19:42 no-preload-211859 containerd[513]: time="2023-01-08T21:19:42.030345576Z" level=info msg="StartContainer for \"01444440cdfa75043bc853a535682e21b80b51a68a4c83045397f1599c379c38\" returns successfully"
	Jan 08 21:22:22 no-preload-211859 containerd[513]: time="2023-01-08T21:22:22.461473083Z" level=info msg="shim disconnected" id=01444440cdfa75043bc853a535682e21b80b51a68a4c83045397f1599c379c38
	Jan 08 21:22:22 no-preload-211859 containerd[513]: time="2023-01-08T21:22:22.461552220Z" level=warning msg="cleaning up after shim disconnected" id=01444440cdfa75043bc853a535682e21b80b51a68a4c83045397f1599c379c38 namespace=k8s.io
	Jan 08 21:22:22 no-preload-211859 containerd[513]: time="2023-01-08T21:22:22.461569868Z" level=info msg="cleaning up dead shim"
	Jan 08 21:22:22 no-preload-211859 containerd[513]: time="2023-01-08T21:22:22.470396935Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:22:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2534 runtime=io.containerd.runc.v2\n"
	Jan 08 21:22:22 no-preload-211859 containerd[513]: time="2023-01-08T21:22:22.868849807Z" level=info msg="CreateContainer within sandbox \"de969308cd0da9bfec2cf38136673604413fd525fb7e1e2091093cb72e00e62d\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Jan 08 21:22:22 no-preload-211859 containerd[513]: time="2023-01-08T21:22:22.884682887Z" level=info msg="CreateContainer within sandbox \"de969308cd0da9bfec2cf38136673604413fd525fb7e1e2091093cb72e00e62d\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"3b86738431af400844575d5347e086e6633f433667bd95cc38980c627cb9bf93\""
	Jan 08 21:22:22 no-preload-211859 containerd[513]: time="2023-01-08T21:22:22.885181322Z" level=info msg="StartContainer for \"3b86738431af400844575d5347e086e6633f433667bd95cc38980c627cb9bf93\""
	Jan 08 21:22:23 no-preload-211859 containerd[513]: time="2023-01-08T21:22:23.025239876Z" level=info msg="StartContainer for \"3b86738431af400844575d5347e086e6633f433667bd95cc38980c627cb9bf93\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-211859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-211859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=no-preload-211859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T21_19_25_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 21:19:21 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-211859
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 08 Jan 2023 21:23:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 21:19:54 +0000   Sun, 08 Jan 2023 21:19:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 21:19:54 +0000   Sun, 08 Jan 2023 21:19:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 21:19:54 +0000   Sun, 08 Jan 2023 21:19:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 08 Jan 2023 21:19:54 +0000   Sun, 08 Jan 2023 21:19:19 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-211859
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                1811e86e-6254-4928-9c37-fe78bdd2d83e
	  Boot ID:                    abb1671c-ddf5-4694-bdc8-1024e5cc0b18
	  Kernel Version:             5.15.0-1025-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.10
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-211859                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m15s
	  kube-system                 kindnet-vh4hl                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m1s
	  kube-system                 kube-apiserver-no-preload-211859             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-controller-manager-no-preload-211859    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-proxy-zb6wz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-scheduler-no-preload-211859             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m                     kube-proxy       
	  Normal  NodeHasSufficientMemory  4m22s (x4 over 4m22s)  kubelet          Node no-preload-211859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x4 over 4m22s)  kubelet          Node no-preload-211859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x4 over 4m22s)  kubelet          Node no-preload-211859 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m15s                  kubelet          Node no-preload-211859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s                  kubelet          Node no-preload-211859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s                  kubelet          Node no-preload-211859 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                   node-controller  Node no-preload-211859 event: Registered Node no-preload-211859 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +2.971851] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027844] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027909] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[Jan 8 21:19] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.006215] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023951] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.967852] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.035798] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023925] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.940341] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.027361] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.019905] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	
	* 
	* ==> etcd [4777a2f6ea154d2e676477c6810e4eebb38bfca013c0990a8605fa7676818ecf] <==
	* {"level":"info","ts":"2023-01-08T21:19:38.340Z","caller":"traceutil/trace.go:171","msg":"trace[2074924048] transaction","detail":"{read_only:false; response_revision:334; number_of_response:1; }","duration":"106.524537ms","start":"2023-01-08T21:19:38.234Z","end":"2023-01-08T21:19:38.340Z","steps":["trace[2074924048] 'process raft request'  (duration: 106.421179ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:19:38.340Z","caller":"traceutil/trace.go:171","msg":"trace[226235715] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"103.575038ms","start":"2023-01-08T21:19:38.237Z","end":"2023-01-08T21:19:38.340Z","steps":["trace[226235715] 'process raft request'  (duration: 103.53261ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:19:38.340Z","caller":"traceutil/trace.go:171","msg":"trace[19672861] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"106.056222ms","start":"2023-01-08T21:19:38.234Z","end":"2023-01-08T21:19:38.340Z","steps":["trace[19672861] 'process raft request'  (duration: 105.814606ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:19:38.340Z","caller":"traceutil/trace.go:171","msg":"trace[1129527188] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"104.142429ms","start":"2023-01-08T21:19:38.236Z","end":"2023-01-08T21:19:38.340Z","steps":["trace[1129527188] 'process raft request'  (duration: 103.925251ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-08T21:19:38.341Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"108.501086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-vh4hl\" ","response":"range_response_count:1 size:3686"}
	{"level":"info","ts":"2023-01-08T21:19:38.341Z","caller":"traceutil/trace.go:171","msg":"trace[246398554] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-vh4hl; range_end:; response_count:1; response_revision:337; }","duration":"108.584023ms","start":"2023-01-08T21:19:38.232Z","end":"2023-01-08T21:19:38.341Z","steps":["trace[246398554] 'agreement among raft nodes before linearized reading'  (duration: 108.494697ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:19:38.476Z","caller":"traceutil/trace.go:171","msg":"trace[882092600] linearizableReadLoop","detail":"{readStateIndex:348; appliedIndex:348; }","duration":"129.175544ms","start":"2023-01-08T21:19:38.346Z","end":"2023-01-08T21:19:38.475Z","steps":["trace[882092600] 'read index received'  (duration: 129.166063ms)","trace[882092600] 'applied index is now lower than readState.Index'  (duration: 8.426µs)"],"step_count":2}
	{"level":"warn","ts":"2023-01-08T21:19:38.558Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"212.073643ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2023-01-08T21:19:38.558Z","caller":"traceutil/trace.go:171","msg":"trace[1428972422] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:338; }","duration":"212.170328ms","start":"2023-01-08T21:19:38.346Z","end":"2023-01-08T21:19:38.558Z","steps":["trace[1428972422] 'agreement among raft nodes before linearized reading'  (duration: 129.289591ms)","trace[1428972422] 'range keys from in-memory index tree'  (duration: 82.642207ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-08T21:19:38.558Z","caller":"traceutil/trace.go:171","msg":"trace[1363471911] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"211.643128ms","start":"2023-01-08T21:19:38.347Z","end":"2023-01-08T21:19:38.558Z","steps":["trace[1363471911] 'process raft request'  (duration: 128.755978ms)","trace[1363471911] 'compare'  (duration: 82.707271ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-08T21:19:38.559Z","caller":"traceutil/trace.go:171","msg":"trace[468553146] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"210.55068ms","start":"2023-01-08T21:19:38.348Z","end":"2023-01-08T21:19:38.559Z","steps":["trace[468553146] 'process raft request'  (duration: 210.431063ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:19:38.559Z","caller":"traceutil/trace.go:171","msg":"trace[1822188889] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"206.78465ms","start":"2023-01-08T21:19:38.352Z","end":"2023-01-08T21:19:38.559Z","steps":["trace[1822188889] 'process raft request'  (duration: 206.647266ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-08T21:19:38.560Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"151.251419ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2023-01-08T21:19:38.560Z","caller":"traceutil/trace.go:171","msg":"trace[1349132424] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:341; }","duration":"151.305603ms","start":"2023-01-08T21:19:38.409Z","end":"2023-01-08T21:19:38.560Z","steps":["trace[1349132424] 'agreement among raft nodes before linearized reading'  (duration: 151.227745ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-08T21:19:38.803Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"215.509358ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-565d847f94\" ","response":"range_response_count:1 size:3685"}
	{"level":"info","ts":"2023-01-08T21:19:38.803Z","caller":"traceutil/trace.go:171","msg":"trace[390813499] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-565d847f94; range_end:; response_count:1; response_revision:349; }","duration":"215.601636ms","start":"2023-01-08T21:19:38.587Z","end":"2023-01-08T21:19:38.803Z","steps":["trace[390813499] 'agreement among raft nodes before linearized reading'  (duration: 122.1355ms)","trace[390813499] 'range keys from in-memory index tree'  (duration: 93.336213ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-08T21:19:38.803Z","caller":"traceutil/trace.go:171","msg":"trace[1873919722] transaction","detail":"{read_only:false; response_revision:351; number_of_response:1; }","duration":"179.476225ms","start":"2023-01-08T21:19:38.623Z","end":"2023-01-08T21:19:38.803Z","steps":["trace[1873919722] 'process raft request'  (duration: 179.422915ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:19:38.803Z","caller":"traceutil/trace.go:171","msg":"trace[475140130] transaction","detail":"{read_only:false; response_revision:350; number_of_response:1; }","duration":"180.612361ms","start":"2023-01-08T21:19:38.622Z","end":"2023-01-08T21:19:38.803Z","steps":["trace[475140130] 'process raft request'  (duration: 87.149705ms)","trace[475140130] 'compare'  (duration: 93.28857ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-08T21:19:38.803Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"203.972372ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-01-08T21:19:38.803Z","caller":"traceutil/trace.go:171","msg":"trace[817869447] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:349; }","duration":"204.287434ms","start":"2023-01-08T21:19:38.599Z","end":"2023-01-08T21:19:38.803Z","steps":["trace[817869447] 'agreement among raft nodes before linearized reading'  (duration: 110.561756ms)","trace[817869447] 'range keys from in-memory index tree'  (duration: 93.388837ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-08T21:19:38.803Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"217.754398ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-211859\" ","response":"range_response_count:1 size:3712"}
	{"level":"info","ts":"2023-01-08T21:19:38.803Z","caller":"traceutil/trace.go:171","msg":"trace[2130402905] range","detail":"{range_begin:/registry/minions/no-preload-211859; range_end:; response_count:1; response_revision:349; }","duration":"218.239404ms","start":"2023-01-08T21:19:38.585Z","end":"2023-01-08T21:19:38.803Z","steps":["trace[2130402905] 'agreement among raft nodes before linearized reading'  (duration: 124.368132ms)","trace[2130402905] 'range keys from in-memory index tree'  (duration: 93.348555ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-08T21:19:38.809Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"189.506275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2023-01-08T21:19:38.809Z","caller":"traceutil/trace.go:171","msg":"trace[456320770] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:351; }","duration":"189.575792ms","start":"2023-01-08T21:19:38.619Z","end":"2023-01-08T21:19:38.809Z","steps":["trace[456320770] 'agreement among raft nodes before linearized reading'  (duration: 189.459866ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:19:39.011Z","caller":"traceutil/trace.go:171","msg":"trace[457235364] transaction","detail":"{read_only:false; response_revision:358; number_of_response:1; }","duration":"163.339053ms","start":"2023-01-08T21:19:38.847Z","end":"2023-01-08T21:19:39.011Z","steps":["trace[457235364] 'process raft request'  (duration: 74.231725ms)","trace[457235364] 'compare'  (duration: 88.976454ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  21:23:40 up  1:06,  0 users,  load average: 0.44, 1.13, 1.64
	Linux no-preload-211859 5.15.0-1025-gcp #32~20.04.2-Ubuntu SMP Tue Nov 29 08:31:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [e5292d3c9357ae424b2211a5576a5c0d1dc2148f92dbb693b2b173d02a43a659] <==
	* I0108 21:19:21.742510       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0108 21:19:21.809931       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0108 21:19:21.821241       1 controller.go:616] quota admission added evaluator for: namespaces
	I0108 21:19:21.831952       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0108 21:19:21.832425       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0108 21:19:21.832531       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:19:21.833141       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:19:21.839440       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:19:22.504449       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 21:19:22.736509       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0108 21:19:22.739503       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0108 21:19:22.739524       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 21:19:23.047398       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:19:23.077104       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 21:19:23.130858       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0108 21:19:23.135237       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0108 21:19:23.136245       1 controller.go:616] quota admission added evaluator for: endpoints
	I0108 21:19:23.139902       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 21:19:23.748461       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0108 21:19:24.353261       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0108 21:19:24.359899       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0108 21:19:24.366666       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0108 21:19:24.431848       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:19:37.979307       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0108 21:19:37.979567       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [1c6e8899fc497e069140e33049c350dcdfe8bcafcaaba19c4666917216092e42] <==
	* I0108 21:19:37.147526       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I0108 21:19:37.147538       1 taint_manager.go:209] "Sending events to api server"
	W0108 21:19:37.147611       1 node_lifecycle_controller.go:1058] Missing timestamp for Node no-preload-211859. Assuming now as a timestamp.
	I0108 21:19:37.147654       1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0108 21:19:37.147730       1 event.go:294] "Event occurred" object="no-preload-211859" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node no-preload-211859 event: Registered Node no-preload-211859 in Controller"
	I0108 21:19:37.148367       1 shared_informer.go:262] Caches are synced for GC
	I0108 21:19:37.179634       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 21:19:37.190768       1 shared_informer.go:262] Caches are synced for stateful set
	I0108 21:19:37.198281       1 shared_informer.go:262] Caches are synced for expand
	I0108 21:19:37.198284       1 shared_informer.go:262] Caches are synced for cronjob
	I0108 21:19:37.203901       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 21:19:37.215008       1 shared_informer.go:262] Caches are synced for ephemeral
	I0108 21:19:37.240454       1 shared_informer.go:262] Caches are synced for attach detach
	I0108 21:19:37.244863       1 shared_informer.go:262] Caches are synced for PVC protection
	I0108 21:19:37.249282       1 shared_informer.go:262] Caches are synced for persistent volume
	I0108 21:19:37.623965       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 21:19:37.698165       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 21:19:37.698192       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0108 21:19:38.152229       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I0108 21:19:38.156739       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vh4hl"
	I0108 21:19:38.232916       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zb6wz"
	I0108 21:19:38.562867       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-6zc6h"
	I0108 21:19:38.563087       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I0108 21:19:38.584331       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-jw8vf"
	I0108 21:19:38.832932       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-6zc6h"
	
	* 
	* ==> kube-proxy [640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6] <==
	* I0108 21:19:39.345755       1 node.go:163] Successfully retrieved node IP: 192.168.85.2
	I0108 21:19:39.345825       1 server_others.go:138] "Detected node IP" address="192.168.85.2"
	I0108 21:19:39.345855       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0108 21:19:39.365639       1 server_others.go:206] "Using iptables Proxier"
	I0108 21:19:39.365673       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0108 21:19:39.365686       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0108 21:19:39.365706       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0108 21:19:39.365730       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:19:39.365898       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:19:39.366196       1 server.go:661] "Version info" version="v1.25.3"
	I0108 21:19:39.366220       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:19:39.366849       1 config.go:444] "Starting node config controller"
	I0108 21:19:39.366868       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0108 21:19:39.367048       1 config.go:317] "Starting service config controller"
	I0108 21:19:39.367072       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0108 21:19:39.367244       1 config.go:226] "Starting endpoint slice config controller"
	I0108 21:19:39.367262       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0108 21:19:39.467069       1 shared_informer.go:262] Caches are synced for node config
	I0108 21:19:39.467273       1 shared_informer.go:262] Caches are synced for service config
	I0108 21:19:39.467307       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [7b61203838e946e52bb257036892e21a8137d6b02ae6e307cba917eba43045f1] <==
	* W0108 21:19:21.827015       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:19:21.827283       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 21:19:21.827006       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:19:21.827309       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 21:19:21.827107       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:19:21.827324       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:19:21.827566       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:19:21.827589       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:19:22.668548       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:19:22.668587       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:19:22.676570       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:19:22.676605       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 21:19:22.742158       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:19:22.742193       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:19:22.795464       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:19:22.795534       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:19:22.816566       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:19:22.816605       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 21:19:22.836431       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:19:22.836467       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 21:19:22.885562       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:19:22.885594       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:19:22.897861       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:19:22.897897       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0108 21:19:25.223767       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:19:01 UTC, end at Sun 2023-01-08 21:23:40 UTC. --
	Jan 08 21:21:44 no-preload-211859 kubelet[1743]: E0108 21:21:44.683213    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:21:49 no-preload-211859 kubelet[1743]: E0108 21:21:49.684557    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:21:54 no-preload-211859 kubelet[1743]: E0108 21:21:54.686091    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:21:59 no-preload-211859 kubelet[1743]: E0108 21:21:59.687022    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:22:04 no-preload-211859 kubelet[1743]: E0108 21:22:04.688504    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:22:09 no-preload-211859 kubelet[1743]: E0108 21:22:09.690077    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:22:14 no-preload-211859 kubelet[1743]: E0108 21:22:14.691311    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:22:19 no-preload-211859 kubelet[1743]: E0108 21:22:19.692746    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:22:22 no-preload-211859 kubelet[1743]: I0108 21:22:22.866538    1743 scope.go:115] "RemoveContainer" containerID="01444440cdfa75043bc853a535682e21b80b51a68a4c83045397f1599c379c38"
	Jan 08 21:22:24 no-preload-211859 kubelet[1743]: E0108 21:22:24.693582    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:22:29 no-preload-211859 kubelet[1743]: E0108 21:22:29.695256    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:22:34 no-preload-211859 kubelet[1743]: E0108 21:22:34.696900    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:22:39 no-preload-211859 kubelet[1743]: E0108 21:22:39.697854    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:22:44 no-preload-211859 kubelet[1743]: E0108 21:22:44.699076    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:22:49 no-preload-211859 kubelet[1743]: E0108 21:22:49.700179    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:22:54 no-preload-211859 kubelet[1743]: E0108 21:22:54.701629    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:22:59 no-preload-211859 kubelet[1743]: E0108 21:22:59.703000    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:04 no-preload-211859 kubelet[1743]: E0108 21:23:04.703930    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:09 no-preload-211859 kubelet[1743]: E0108 21:23:09.705507    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:14 no-preload-211859 kubelet[1743]: E0108 21:23:14.706455    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:19 no-preload-211859 kubelet[1743]: E0108 21:23:19.707650    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:24 no-preload-211859 kubelet[1743]: E0108 21:23:24.708527    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:29 no-preload-211859 kubelet[1743]: E0108 21:23:29.709078    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:34 no-preload-211859 kubelet[1743]: E0108 21:23:34.710089    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:39 no-preload-211859 kubelet[1743]: E0108 21:23:39.711698    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-211859 -n no-preload-211859
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-211859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-565d847f94-jw8vf storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-211859 describe pod coredns-565d847f94-jw8vf storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-211859 describe pod coredns-565d847f94-jw8vf storage-provisioner: exit status 1 (60.936369ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-jw8vf" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-211859 describe pod coredns-565d847f94-jw8vf storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (281.67s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (288.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-211952 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
E0108 21:20:15.378199   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 21:20:23.145032   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-211952 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: exit status 80 (4m47.08077108s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-211952] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node default-k8s-diff-port-211952 in cluster default-k8s-diff-port-211952
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:19:52.956462  245190 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:19:52.956678  245190 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:19:52.956689  245190 out.go:309] Setting ErrFile to fd 2...
	I0108 21:19:52.956695  245190 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:19:52.956804  245190 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:19:52.957426  245190 out.go:303] Setting JSON to false
	I0108 21:19:52.958701  245190 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3742,"bootTime":1673209051,"procs":431,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:19:52.958763  245190 start.go:135] virtualization: kvm guest
	I0108 21:19:52.961689  245190 out.go:177] * [default-k8s-diff-port-211952] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:19:52.963648  245190 notify.go:220] Checking for updates...
	I0108 21:19:52.965140  245190 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:19:52.966919  245190 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:19:52.968685  245190 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:19:52.970277  245190 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:19:52.971976  245190 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:19:52.974276  245190 config.go:180] Loaded profile config "embed-certs-211950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:19:52.974449  245190 config.go:180] Loaded profile config "no-preload-211859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:19:52.974595  245190 config.go:180] Loaded profile config "old-k8s-version-211828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0108 21:19:52.974655  245190 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:19:53.012390  245190 docker.go:137] docker version: linux-20.10.22
	I0108 21:19:53.012503  245190 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:19:53.119932  245190 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:65 SystemTime:2023-01-08 21:19:53.033535491 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:19:53.120032  245190 docker.go:254] overlay module found
	I0108 21:19:53.122374  245190 out.go:177] * Using the docker driver based on user configuration
	I0108 21:19:53.123929  245190 start.go:294] selected driver: docker
	I0108 21:19:53.123946  245190 start.go:838] validating driver "docker" against <nil>
	I0108 21:19:53.123968  245190 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:19:53.124813  245190 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:19:53.231042  245190 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:65 SystemTime:2023-01-08 21:19:53.147923655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:19:53.231205  245190 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I0108 21:19:53.231428  245190 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:19:53.233705  245190 out.go:177] * Using Docker driver with root privileges
	I0108 21:19:53.235249  245190 cni.go:95] Creating CNI manager for ""
	I0108 21:19:53.235268  245190 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:19:53.235286  245190 start_flags.go:312] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 21:19:53.235302  245190 start_flags.go:317] config:
	{Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:19:53.237832  245190 out.go:177] * Starting control plane node default-k8s-diff-port-211952 in cluster default-k8s-diff-port-211952
	I0108 21:19:53.239532  245190 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:19:53.241158  245190 out.go:177] * Pulling base image ...
	I0108 21:19:53.242716  245190 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:19:53.242756  245190 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0108 21:19:53.242772  245190 cache.go:57] Caching tarball of preloaded images
	I0108 21:19:53.242831  245190 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:19:53.243032  245190 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:19:53.243050  245190 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0108 21:19:53.243246  245190 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/config.json ...
	I0108 21:19:53.243276  245190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/config.json: {Name:mkb97849e15179c2af5353f1e32a3aa2aa2f131a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:19:53.275088  245190 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:19:53.275119  245190 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:19:53.275150  245190 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:19:53.275202  245190 start.go:364] acquiring machines lock for default-k8s-diff-port-211952: {Name:mk8d09fc97f48331eb5f466fa120df2ec3fb1468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:19:53.275355  245190 start.go:368] acquired machines lock for "default-k8s-diff-port-211952" in 126.549µs
	I0108 21:19:53.275389  245190 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:19:53.275545  245190 start.go:125] createHost starting for "" (driver="docker")
	I0108 21:19:53.279650  245190 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 21:19:53.279947  245190 start.go:159] libmachine.API.Create for "default-k8s-diff-port-211952" (driver="docker")
	I0108 21:19:53.279988  245190 client.go:168] LocalClient.Create starting
	I0108 21:19:53.280074  245190 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem
	I0108 21:19:53.280111  245190 main.go:134] libmachine: Decoding PEM data...
	I0108 21:19:53.280146  245190 main.go:134] libmachine: Parsing certificate...
	I0108 21:19:53.280225  245190 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem
	I0108 21:19:53.280257  245190 main.go:134] libmachine: Decoding PEM data...
	I0108 21:19:53.280275  245190 main.go:134] libmachine: Parsing certificate...
	I0108 21:19:53.280687  245190 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-211952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 21:19:53.304327  245190 cli_runner.go:211] docker network inspect default-k8s-diff-port-211952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 21:19:53.304408  245190 network_create.go:272] running [docker network inspect default-k8s-diff-port-211952] to gather additional debugging logs...
	I0108 21:19:53.304436  245190 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-211952
	W0108 21:19:53.335116  245190 cli_runner.go:211] docker network inspect default-k8s-diff-port-211952 returned with exit code 1
	I0108 21:19:53.335155  245190 network_create.go:275] error running [docker network inspect default-k8s-diff-port-211952]: docker network inspect default-k8s-diff-port-211952: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-diff-port-211952
	I0108 21:19:53.335175  245190 network_create.go:277] output of [docker network inspect default-k8s-diff-port-211952]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-diff-port-211952
	
	** /stderr **
	I0108 21:19:53.335231  245190 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:19:53.366179  245190 network.go:244] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b55bc2878bca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d4:2d:1f:91}}
	I0108 21:19:53.367582  245190 network.go:244] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-6ab3f57c56bf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:58:4f:a6:4e}}
	I0108 21:19:53.369075  245190 network.go:306] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000644470] misses:0}
	I0108 21:19:53.369117  245190 network.go:239] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 21:19:53.369134  245190 network_create.go:115] attempt to create docker network default-k8s-diff-port-211952 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0108 21:19:53.369203  245190 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-211952 default-k8s-diff-port-211952
	I0108 21:19:53.432280  245190 network_create.go:99] docker network default-k8s-diff-port-211952 192.168.67.0/24 created
	I0108 21:19:53.432310  245190 kic.go:106] calculated static IP "192.168.67.2" for the "default-k8s-diff-port-211952" container
	I0108 21:19:53.432359  245190 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 21:19:53.464738  245190 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-211952 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-211952 --label created_by.minikube.sigs.k8s.io=true
	I0108 21:19:53.493058  245190 oci.go:103] Successfully created a docker volume default-k8s-diff-port-211952
	I0108 21:19:53.493156  245190 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-211952-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-211952 --entrypoint /usr/bin/test -v default-k8s-diff-port-211952:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
	I0108 21:19:57.071397  245190 cli_runner.go:217] Completed: docker run --rm --name default-k8s-diff-port-211952-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-211952 --entrypoint /usr/bin/test -v default-k8s-diff-port-211952:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib: (3.578190403s)
	I0108 21:19:57.071429  245190 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-211952
	I0108 21:19:57.071495  245190 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:19:57.071521  245190 kic.go:179] Starting extracting preloaded images to volume ...
	I0108 21:19:57.071567  245190 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-211952:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 21:20:01.016410  245190 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-211952:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (3.944783498s)
	I0108 21:20:01.016454  245190 kic.go:188] duration metric: took 3.944929 seconds to extract preloaded images to volume
	W0108 21:20:01.016581  245190 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 21:20:01.016658  245190 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 21:20:01.126784  245190 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-211952 --name default-k8s-diff-port-211952 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-211952 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-211952 --network default-k8s-diff-port-211952 --ip 192.168.67.2 --volume default-k8s-diff-port-211952:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
	I0108 21:20:01.553108  245190 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Running}}
	I0108 21:20:01.585097  245190 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:20:01.617042  245190 cli_runner.go:164] Run: docker exec default-k8s-diff-port-211952 stat /var/lib/dpkg/alternatives/iptables
	I0108 21:20:01.692230  245190 oci.go:144] the created container "default-k8s-diff-port-211952" has a running status.
	I0108 21:20:01.692265  245190 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa...
	I0108 21:20:01.983315  245190 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 21:20:02.067243  245190 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:20:02.099988  245190 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 21:20:02.100016  245190 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-211952 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 21:20:02.174751  245190 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:20:02.205076  245190 machine.go:88] provisioning docker machine ...
	I0108 21:20:02.205114  245190 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-211952"
	I0108 21:20:02.205170  245190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:20:02.234621  245190 main.go:134] libmachine: Using SSH client type: native
	I0108 21:20:02.234790  245190 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33027 <nil> <nil>}
	I0108 21:20:02.234806  245190 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-211952 && echo "default-k8s-diff-port-211952" | sudo tee /etc/hostname
	I0108 21:20:02.360635  245190 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-211952
	
	I0108 21:20:02.360716  245190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:20:02.388036  245190 main.go:134] libmachine: Using SSH client type: native
	I0108 21:20:02.388180  245190 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33027 <nil> <nil>}
	I0108 21:20:02.388199  245190 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-211952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-211952/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-211952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:20:02.503251  245190 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:20:02.503285  245190 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:20:02.503313  245190 ubuntu.go:177] setting up certificates
	I0108 21:20:02.503327  245190 provision.go:83] configureAuth start
	I0108 21:20:02.503390  245190 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:20:02.527322  245190 provision.go:138] copyHostCerts
	I0108 21:20:02.527382  245190 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:20:02.527395  245190 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:20:02.527521  245190 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:20:02.527622  245190 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:20:02.527634  245190 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:20:02.527670  245190 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:20:02.527724  245190 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:20:02.527732  245190 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:20:02.527755  245190 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:20:02.527797  245190 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-211952 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-211952]
	I0108 21:20:02.932279  245190 provision.go:172] copyRemoteCerts
	I0108 21:20:02.932331  245190 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:20:02.932363  245190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:20:02.959769  245190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33027 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:20:03.047341  245190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:20:03.067427  245190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0108 21:20:03.085876  245190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:20:03.104585  245190 provision.go:86] duration metric: configureAuth took 601.24093ms
	I0108 21:20:03.104617  245190 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:20:03.104924  245190 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:20:03.104946  245190 machine.go:91] provisioned docker machine in 899.847881ms
	I0108 21:20:03.104954  245190 client.go:171] LocalClient.Create took 9.824957158s
	I0108 21:20:03.104973  245190 start.go:167] duration metric: libmachine.API.Create for "default-k8s-diff-port-211952" took 9.825027141s
	I0108 21:20:03.104982  245190 start.go:300] post-start starting for "default-k8s-diff-port-211952" (driver="docker")
	I0108 21:20:03.104992  245190 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:20:03.105045  245190 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:20:03.105087  245190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:20:03.139566  245190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33027 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:20:03.227401  245190 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:20:03.230154  245190 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:20:03.230176  245190 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:20:03.230186  245190 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:20:03.230192  245190 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:20:03.230200  245190 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:20:03.230247  245190 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:20:03.230325  245190 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:20:03.230399  245190 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:20:03.237951  245190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:20:03.255566  245190 start.go:303] post-start completed in 150.572691ms
	I0108 21:20:03.255916  245190 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:20:03.281020  245190 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/config.json ...
	I0108 21:20:03.281340  245190 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:20:03.281395  245190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:20:03.309257  245190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33027 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:20:03.392173  245190 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:20:03.396208  245190 start.go:128] duration metric: createHost completed in 10.120646936s
	I0108 21:20:03.396235  245190 start.go:83] releasing machines lock for "default-k8s-diff-port-211952", held for 10.120862s
	I0108 21:20:03.396336  245190 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:20:03.421841  245190 ssh_runner.go:195] Run: cat /version.json
	I0108 21:20:03.421901  245190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:20:03.421930  245190 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:20:03.421993  245190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:20:03.450184  245190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33027 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:20:03.455622  245190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33027 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:20:03.530785  245190 ssh_runner.go:195] Run: systemctl --version
	I0108 21:20:03.564220  245190 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:20:03.574372  245190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:20:03.583562  245190 docker.go:189] disabling docker service ...
	I0108 21:20:03.583612  245190 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:20:03.601769  245190 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:20:03.611814  245190 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:20:03.693036  245190 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:20:03.768510  245190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:20:03.777753  245190 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:20:03.790557  245190 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:20:03.798420  245190 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:20:03.806302  245190 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:20:03.814617  245190 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 21:20:03.822393  245190 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:20:03.829113  245190 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:20:03.835646  245190 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:20:03.927309  245190 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:20:04.008115  245190 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:20:04.008226  245190 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:20:04.011791  245190 start.go:472] Will wait 60s for crictl version
	I0108 21:20:04.011839  245190 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:20:04.039057  245190 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:20:04Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 21:20:15.086950  245190 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:20:15.111964  245190 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:20:15.112027  245190 ssh_runner.go:195] Run: containerd --version
	I0108 21:20:15.135696  245190 ssh_runner.go:195] Run: containerd --version
	I0108 21:20:15.160604  245190 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:20:15.162071  245190 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-211952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:20:15.184681  245190 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0108 21:20:15.187873  245190 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:20:15.196974  245190 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:20:15.197028  245190 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:20:15.220214  245190 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:20:15.220239  245190 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:20:15.220299  245190 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:20:15.243447  245190 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:20:15.243502  245190 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:20:15.243550  245190 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:20:15.266460  245190 cni.go:95] Creating CNI manager for ""
	I0108 21:20:15.266491  245190 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:20:15.266503  245190 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:20:15.266522  245190 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-211952 NodeName:default-k8s-diff-port-211952 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/c
erts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:20:15.266688  245190 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-diff-port-211952"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:20:15.266801  245190 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-diff-port-211952 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0108 21:20:15.266853  245190 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:20:15.273880  245190 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:20:15.273953  245190 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:20:15.281298  245190 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (521 bytes)
	I0108 21:20:15.294472  245190 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:20:15.307482  245190 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes)
	I0108 21:20:15.322264  245190 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:20:15.325267  245190 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:20:15.335709  245190 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952 for IP: 192.168.67.2
	I0108 21:20:15.335824  245190 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:20:15.335881  245190 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:20:15.335939  245190 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/client.key
	I0108 21:20:15.335956  245190 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/client.crt with IP's: []
	I0108 21:20:15.676800  245190 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/client.crt ...
	I0108 21:20:15.676833  245190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/client.crt: {Name:mk76d40c42e91ccfb23f017d10ac56a172c0bf7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:20:15.677034  245190 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/client.key ...
	I0108 21:20:15.677045  245190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/client.key: {Name:mkeef7863c54f83020e9a47844fec90c6c201095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:20:15.677133  245190 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key.c7fa3a9e
	I0108 21:20:15.677149  245190 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 21:20:15.857312  245190 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.crt.c7fa3a9e ...
	I0108 21:20:15.857342  245190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.crt.c7fa3a9e: {Name:mkc41f3bb20448a61a38f97a14ff4b9358c4870c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:20:15.857575  245190 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key.c7fa3a9e ...
	I0108 21:20:15.857593  245190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key.c7fa3a9e: {Name:mk3370edc1d5cc837dde0e3224a5bc9df8bed178 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:20:15.857723  245190 certs.go:320] copying /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.crt
	I0108 21:20:15.857819  245190 certs.go:324] copying /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key
	I0108 21:20:15.857910  245190 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.key
	I0108 21:20:15.857935  245190 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.crt with IP's: []
	I0108 21:20:15.950323  245190 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.crt ...
	I0108 21:20:15.950351  245190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.crt: {Name:mk9f443487251de4e07ef8a044e1bf712734a7f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:20:15.950578  245190 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.key ...
	I0108 21:20:15.950591  245190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.key: {Name:mk70b6446026de182b064e60a1f9ad7066639596 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:20:15.950768  245190 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:20:15.950806  245190 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:20:15.950818  245190 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:20:15.950844  245190 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:20:15.950869  245190 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:20:15.950894  245190 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:20:15.950947  245190 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:20:15.951459  245190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:20:15.971878  245190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:20:15.989109  245190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:20:16.006225  245190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:20:16.023583  245190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:20:16.041317  245190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:20:16.058888  245190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:20:16.077110  245190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:20:16.095032  245190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:20:16.113552  245190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:20:16.131771  245190 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:20:16.149934  245190 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:20:16.162585  245190 ssh_runner.go:195] Run: openssl version
	I0108 21:20:16.167240  245190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:20:16.174414  245190 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:20:16.177439  245190 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:20:16.177489  245190 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:20:16.182210  245190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:20:16.189320  245190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:20:16.196390  245190 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:20:16.199327  245190 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:20:16.199409  245190 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:20:16.204031  245190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:20:16.210935  245190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:20:16.218068  245190 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:20:16.221144  245190 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:20:16.221198  245190 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:20:16.225966  245190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:20:16.233203  245190 kubeadm.go:396] StartCluster: {Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:20:16.233310  245190 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:20:16.233345  245190 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:20:16.257083  245190 cri.go:87] found id: ""
	I0108 21:20:16.257145  245190 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:20:16.264240  245190 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:20:16.270886  245190 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:20:16.270934  245190 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:20:16.277779  245190 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:20:16.277823  245190 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:20:16.326365  245190 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:20:16.326436  245190 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:20:16.356793  245190 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:20:16.356883  245190 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:20:16.356949  245190 kubeadm.go:317] OS: Linux
	I0108 21:20:16.357023  245190 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:20:16.357094  245190 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:20:16.357156  245190 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:20:16.357226  245190 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:20:16.357272  245190 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:20:16.357325  245190 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:20:16.357403  245190 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:20:16.357477  245190 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:20:16.357532  245190 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:20:16.422528  245190 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:20:16.422659  245190 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:20:16.422763  245190 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:20:16.540391  245190 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:20:16.542981  245190 out.go:204]   - Generating certificates and keys ...
	I0108 21:20:16.543111  245190 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:20:16.543219  245190 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:20:16.711327  245190 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 21:20:16.780507  245190 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0108 21:20:16.909907  245190 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0108 21:20:17.189019  245190 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0108 21:20:17.324010  245190 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0108 21:20:17.324178  245190 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-211952 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0108 21:20:17.426124  245190 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0108 21:20:17.426244  245190 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-211952 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0108 21:20:17.642934  245190 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 21:20:17.790066  245190 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 21:20:17.970283  245190 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0108 21:20:17.970417  245190 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:20:18.110610  245190 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:20:18.208039  245190 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:20:18.466578  245190 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:20:18.574036  245190 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:20:18.586598  245190 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:20:18.587599  245190 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:20:18.587659  245190 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:20:18.676352  245190 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:20:18.679038  245190 out.go:204]   - Booting up control plane ...
	I0108 21:20:18.679243  245190 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:20:18.681380  245190 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:20:18.682542  245190 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:20:18.683502  245190 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:20:18.685792  245190 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:20:24.689260  245190 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.003377 seconds
	I0108 21:20:24.689400  245190 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:20:24.697697  245190 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:20:25.214225  245190 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:20:25.214468  245190 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-diff-port-211952 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:20:25.723385  245190 kubeadm.go:317] [bootstrap-token] Using token: fthpds.4cvrskuk2fbt44yi
	I0108 21:20:25.725167  245190 out.go:204]   - Configuring RBAC rules ...
	I0108 21:20:25.725311  245190 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:20:25.728014  245190 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:20:25.733986  245190 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:20:25.736383  245190 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:20:25.739014  245190 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:20:25.741838  245190 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:20:25.750305  245190 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:20:26.000418  245190 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:20:26.132957  245190 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:20:26.134604  245190 kubeadm.go:317] 
	I0108 21:20:26.134685  245190 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:20:26.134697  245190 kubeadm.go:317] 
	I0108 21:20:26.134775  245190 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:20:26.134785  245190 kubeadm.go:317] 
	I0108 21:20:26.134811  245190 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:20:26.134875  245190 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:20:26.134929  245190 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:20:26.134939  245190 kubeadm.go:317] 
	I0108 21:20:26.134994  245190 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 21:20:26.135004  245190 kubeadm.go:317] 
	I0108 21:20:26.135053  245190 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:20:26.135065  245190 kubeadm.go:317] 
	I0108 21:20:26.135118  245190 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:20:26.135203  245190 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:20:26.135276  245190 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:20:26.135290  245190 kubeadm.go:317] 
	I0108 21:20:26.135375  245190 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:20:26.135457  245190 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:20:26.135467  245190 kubeadm.go:317] 
	I0108 21:20:26.135565  245190 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token fthpds.4cvrskuk2fbt44yi \
	I0108 21:20:26.135687  245190 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:20:26.135716  245190 kubeadm.go:317] 	--control-plane 
	I0108 21:20:26.135726  245190 kubeadm.go:317] 
	I0108 21:20:26.135823  245190 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:20:26.135834  245190 kubeadm.go:317] 
	I0108 21:20:26.135929  245190 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token fthpds.4cvrskuk2fbt44yi \
	I0108 21:20:26.136051  245190 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:20:26.138683  245190 kubeadm.go:317] W0108 21:20:16.315673     751 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:20:26.138918  245190 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:20:26.139034  245190 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:20:26.139066  245190 cni.go:95] Creating CNI manager for ""
	I0108 21:20:26.139075  245190 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:20:26.142607  245190 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:20:26.144255  245190 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:20:26.150401  245190 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:20:26.150429  245190 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:20:26.166804  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:20:27.096883  245190 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:20:27.096998  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:27.097005  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=default-k8s-diff-port-211952 minikube.k8s.io/updated_at=2023_01_08T21_20_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:27.111871  245190 ops.go:34] apiserver oom_adj: -16
	I0108 21:20:27.168023  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:27.769372  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:28.269672  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:28.769414  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:29.269613  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:29.768986  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:30.269334  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:30.769424  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:31.268914  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:31.769324  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:32.269467  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:32.769687  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:33.269755  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:33.769650  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:34.269775  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:34.769440  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:35.268820  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:35.768888  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:36.268749  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:36.769521  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:37.269263  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:37.769740  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:38.269661  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:38.769720  245190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:20:38.837928  245190 kubeadm.go:1067] duration metric: took 11.740984422s to wait for elevateKubeSystemPrivileges.
	I0108 21:20:38.837958  245190 kubeadm.go:398] StartCluster complete in 22.604762974s
	I0108 21:20:38.837974  245190 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:20:38.838085  245190 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:20:38.840835  245190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0108 21:20:38.858584  245190 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0108 21:20:39.861198  245190 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-diff-port-211952" rescaled to 1
	I0108 21:20:39.861256  245190 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:20:39.861271  245190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:20:39.863578  245190 out.go:177] * Verifying Kubernetes components...
	I0108 21:20:39.861358  245190 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0108 21:20:39.861509  245190 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:20:39.865275  245190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:20:39.865287  245190 addons.go:65] Setting default-storageclass=true in profile "default-k8s-diff-port-211952"
	I0108 21:20:39.865305  245190 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-211952"
	I0108 21:20:39.865278  245190 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-diff-port-211952"
	I0108 21:20:39.865384  245190 addons.go:227] Setting addon storage-provisioner=true in "default-k8s-diff-port-211952"
	W0108 21:20:39.865398  245190 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:20:39.865475  245190 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:20:39.865715  245190 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:20:39.865962  245190 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:20:39.900961  245190 addons.go:227] Setting addon default-storageclass=true in "default-k8s-diff-port-211952"
	W0108 21:20:39.900990  245190 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:20:39.901017  245190 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:20:39.903520  245190 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:20:39.901400  245190 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:20:39.905194  245190 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:20:39.905214  245190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:20:39.905284  245190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:20:39.936026  245190 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:20:39.936052  245190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:20:39.936123  245190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:20:39.936943  245190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33027 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:20:39.940047  245190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:20:39.941509  245190 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-211952" to be "Ready" ...
	I0108 21:20:39.968053  245190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33027 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:20:40.041364  245190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:20:40.124861  245190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:20:40.317968  245190 start.go:826] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0108 21:20:40.527623  245190 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 21:20:40.528971  245190 addons.go:488] enableAddons completed in 667.616876ms
	I0108 21:20:41.947626  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:20:43.948643  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:20:46.447743  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:20:48.948111  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:20:51.447815  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:20:53.448462  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:20:55.448501  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:20:57.448646  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:20:59.948361  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:01.948567  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:04.447959  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:06.449102  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:08.948177  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:10.948447  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:13.448462  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:15.448772  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:17.948651  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:20.448682  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:22.948378  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:25.448237  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:27.948001  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:29.948452  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:31.948574  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:34.448121  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:36.947638  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:39.448188  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:41.448816  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:43.947639  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:45.948111  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:47.948466  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:50.447460  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:52.448611  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:54.947665  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:56.947700  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:58.949756  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:01.448451  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:03.947604  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:05.948211  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:08.447451  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:10.448497  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:12.948218  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:15.449977  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:17.947675  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:19.947707  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:21.948479  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:24.447621  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:26.448004  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:28.947732  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:31.448269  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:33.948439  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:36.448349  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:38.948577  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:41.447813  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:43.448406  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:45.947675  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:47.948164  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:50.448245  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:52.948450  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:55.447735  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:57.448306  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:59.448595  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:01.448628  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:03.948469  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:06.448911  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:08.947910  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:11.447912  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:13.448469  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:15.947494  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:17.948406  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:20.447825  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:22.448183  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:24.448405  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:26.948163  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:29.447764  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:31.448133  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:33.448453  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:35.947611  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:37.947842  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:40.447666  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:42.447882  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:44.448094  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:46.448207  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:48.948453  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:51.448318  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:53.448748  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:55.948302  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:57.948356  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:00.447631  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:02.448258  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:04.947683  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:06.948594  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:09.448299  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:11.448411  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:13.948419  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:16.447752  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:18.448212  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:20.948454  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:23.448057  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:25.948520  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:28.448303  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:30.948633  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:33.448497  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:35.448648  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:37.448697  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:39.948153  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:39.950285  245190 node_ready.go:38] duration metric: took 4m0.008748338s waiting for node "default-k8s-diff-port-211952" to be "Ready" ...
	I0108 21:24:39.952493  245190 out.go:177] 
	W0108 21:24:39.954000  245190 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:24:39.954018  245190 out.go:239] * 
	* 
	W0108 21:24:39.954886  245190 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:24:39.956750  245190 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-211952 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-211952
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-211952:

-- stdout --
	[
	    {
	        "Id": "553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a",
	        "Created": "2023-01-08T21:20:01.150415833Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246803,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:20:01.544064591Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/hostname",
	        "HostsPath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/hosts",
	        "LogPath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a-json.log",
	        "Name": "/default-k8s-diff-port-211952",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-211952:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-211952",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4-init/diff:/var/lib/docker/overlay2/08b33854295261bb38a0074055b3dd7f2f489d35ea70ee5a594182ad1dc0fd2b/diff:/var/lib/docker/overlay2/5abe192e44c49d2a25560e4c1ae4a6ba5ec58100f8e1869d306a28b614caf414/diff:/var/lib/docker/overlay2/156a32df2dac676d7cd6558d5ad13be5054eb2fddc8d1b10137bf34bd6b1ddf7/diff:/var/lib/docker/overlay2/f3a5b8630133b5cdc48bbdb98f666efad221386dbd08f5fc096c7ba6d3f31667/diff:/var/lib/docker/overlay2/4d4ccd218e479874818175ec6c4e28f06f0084eed74b9a83af5a96e8bf41e6c7/diff:/var/lib/docker/overlay2/ab8a3247b29cb45a0024dbc2477a50b042f03e39064c09e23d21a2b5f331013c/diff:/var/lib/docker/overlay2/cd6161fa5d366910a9eb537894f374c49b0ba256c81a9c1ad3d4f4786d730668/diff:/var/lib/docker/overlay2/a26d88db409214dee405ed385f40b0d58e37ef670c26e65c1132eb27e3bb984d/diff:/var/lib/docker/overlay2/f76d833c20bf46fe7aec412f931039cdeb58b129552b97228c55fada8b05e63a/diff:/var/lib/docker/overlay2/584beb
80220462b8bc5a6c651735d55544ebef775c71b1a91b92235049a7e9da/diff:/var/lib/docker/overlay2/5209ebd508837f05291d5f22eff307db47f9c94691dd096cdc6ff6446db45461/diff:/var/lib/docker/overlay2/968fc302c0737742d4698bb0026b5cfc50c4b2934930db492c97b346fca2bd67/diff:/var/lib/docker/overlay2/b469d09ff82a75fb0f6dd04264a04b21d3d113c8577afc6dbc6a7dafe4022179/diff:/var/lib/docker/overlay2/2869116cf2f02bdc6170bde623a11022eba8e5c0f84630c9a0fd0a86483d0ed2/diff:/var/lib/docker/overlay2/ad4794ae637b0ce306963007b35562110fba050d9e3cd6196830448c749d7cc6/diff:/var/lib/docker/overlay2/a53b191897dfffb12ad2948cad16d668a0f7c2be5e3e6fd2556fe22d8d608879/diff:/var/lib/docker/overlay2/bbc7772821406640e29f442eb8752d92d7b72e8314035d6cf94255976b90642e/diff:/var/lib/docker/overlay2/5e3ac5600af83e3f6f756895fbd7a66fa97e799f659541508353078d057fd89b/diff:/var/lib/docker/overlay2/79c23e7084aa1de637cb09a7c07679d6c8d4ff6c6e7384f7f15cb549f8186fd6/diff:/var/lib/docker/overlay2/86ef1d4759376f1edb9cf43004ae59ba61200d79acea357b872de5247071531a/diff:/var/lib/d
ocker/overlay2/5c948a7514e051c2a3b4600944d475a674404c1f58b1bd7fd4d7b764be35c3e7/diff:/var/lib/docker/overlay2/71b25d81823337d776e0202f623b2f822a2ebba0e8f935e3211da145bbdac22d/diff:/var/lib/docker/overlay2/4261f26f8c837e7f8f07ba0cd2aca7274aa96561ac0bd104af28d44c15678f8d/diff:/var/lib/docker/overlay2/f2a71b21156c3ad367bb9fe8bde3e75df6681f9e505d34d8e1b635e50aa0e29b/diff:/var/lib/docker/overlay2/9ef1fde443b60bf4cda7ef2a51adcacef4383280f30eb6ee14e23959a5395e18/diff:/var/lib/docker/overlay2/aa727d5a432452b3e1ed2fc994db0bf2d5a47af89bc768a1e2b633baa8319025/diff:/var/lib/docker/overlay2/dc9499b738334b5626de1363086d853c4e6c2c5b2d51f58e8e1ed8346cc39f45/diff:/var/lib/docker/overlay2/03cc962675627b454b9857b5ffcac439143653d35cd42ad886571fbecd955c6c/diff:/var/lib/docker/overlay2/32064543ecb8bae609fda6b220233d7b73f4e601bd320e1d389451fb120c2459/diff:/var/lib/docker/overlay2/a0c00493cc3fdbdaa87f61faf83c8fcedb9ee4d5b24ecc35eceba0429b6dcc88/diff:/var/lib/docker/overlay2/18cbe77f8dab8144889e1deb46a8795fd207afbdfd9ea3fdeb526b72d7e
4a4cb/diff:/var/lib/docker/overlay2/dca4f5d0bca28bc75fa764afd39d07871f051c2d362fc2850ca957b975f75555/diff:/var/lib/docker/overlay2/cd071d8f661b9ff9c6d861fce06842188fda3d307062488b93944a97883f4a67/diff:/var/lib/docker/overlay2/5a0c166a1e11d0bb9f7de4a882be960f70ac972e91b3ab8fd83fc6eec2bf1d5e/diff:/var/lib/docker/overlay2/43eb3ae44d3bf3e6cbf5a66accacb1247ef210213c56a67097949d06794f4bbc/diff:/var/lib/docker/overlay2/b60cb5b525f4aed6b27cf2e451596e134d1185254f3798d5a91357814552e273/diff:/var/lib/docker/overlay2/e5c8c85e78fedb1981436f368ad70bdfe38d240dddceac3d103e1d09bc252cef/diff:/var/lib/docker/overlay2/7bf66ab7d17eda3c6760fe43588afa33bb56279f5a7202de9cc98b4711f0da5b/diff:/var/lib/docker/overlay2/d5dbd6e8a20b8681e7b55b57c382c8c9029da8496cc9e0f4d1eb3b224583535e/diff:/var/lib/docker/overlay2/de3ca9b514aa5071c0fd39d0d80fa080bce6e82e872747ec916ef7bf379c0c2a/diff:/var/lib/docker/overlay2/80bff03df7f2a6639d98a2c99d1cc55a36dc9110501452bfd7bce3bce496541e/diff:/var/lib/docker/overlay2/471bea5a4732f210c3fcf87a02643b615a96f6
39d01ac2c3e71f7886b23733a5/diff:/var/lib/docker/overlay2/29dd84326e3ef52c85008a60a7ff2225fd18f1d51ec3e60f143a0d93e442469d/diff:/var/lib/docker/overlay2/05d035eba3e40a396836d9917064869afd7ec70920f526f8903dc8f5a9f2dce3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-211952",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-211952/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-211952",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-211952",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-211952",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c2bf33dbb62611d9560108b1c0a529546771fed3ac5d99ff62eef897f847b173",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33023"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c2bf33dbb626",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-211952": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "553ec1d733bb",
	                        "default-k8s-diff-port-211952"
	                    ],
	                    "NetworkID": "dac77270e17703c586bb819b54d2f7262cc084b9a2efd9432712b1970a60294f",
	                    "EndpointID": "3d04b80dad9440ee7c222e1d09648b2670e8c59dcc8578b2d8550cd138b1734d",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-211952 -n default-k8s-diff-port-211952
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-211952 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-210943                                   | pause-210943                 | jenkins | v1.28.0 | 08 Jan 23 21:11 UTC | 08 Jan 23 21:11 UTC |
	| start   | -p cilium-210619 --memory=2048                    | cilium-210619                | jenkins | v1.28.0 | 08 Jan 23 21:11 UTC | 08 Jan 23 21:12 UTC |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                              |         |         |                     |                     |
	|         | --cni=cilium --driver=docker                      |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-210725                         | cert-expiration-210725       | jenkins | v1.28.0 | 08 Jan 23 21:11 UTC | 08 Jan 23 21:11 UTC |
	| start   | -p calico-210619 --memory=2048                    | calico-210619                | jenkins | v1.28.0 | 08 Jan 23 21:11 UTC |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                              |         |         |                     |                     |
	|         | --cni=calico --driver=docker                      |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	| ssh     | -p kindnet-210619 pgrep -a                        | kindnet-210619               | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	|         | kubelet                                           |                              |         |         |                     |                     |
	| delete  | -p kindnet-210619                                 | kindnet-210619               | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	| start   | -p enable-default-cni-210619                      | enable-default-cni-210619    | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	|         | --memory=2048                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                              |         |         |                     |                     |
	|         | --enable-default-cni=true                         |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	| ssh     | -p cilium-210619 pgrep -a                         | cilium-210619                | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	|         | kubelet                                           |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-210619                      | enable-default-cni-210619    | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	|         | pgrep -a kubelet                                  |                              |         |         |                     |                     |
	| delete  | -p cilium-210619                                  | cilium-210619                | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:12 UTC |
	| start   | -p bridge-210619 --memory=2048                    | bridge-210619                | jenkins | v1.28.0 | 08 Jan 23 21:12 UTC | 08 Jan 23 21:13 UTC |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                              |         |         |                     |                     |
	|         | --cni=bridge --driver=docker                      |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	| ssh     | -p bridge-210619 pgrep -a                         | bridge-210619                | jenkins | v1.28.0 | 08 Jan 23 21:13 UTC | 08 Jan 23 21:13 UTC |
	|         | kubelet                                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-210902                      | kubernetes-upgrade-210902    | jenkins | v1.28.0 | 08 Jan 23 21:18 UTC | 08 Jan 23 21:18 UTC |
	| start   | -p old-k8s-version-211828                         | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:18 UTC |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --kvm-network=default                             |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                              |         |         |                     |                     |
	|         | --keep-context=false                              |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-210619                      | enable-default-cni-210619    | jenkins | v1.28.0 | 08 Jan 23 21:18 UTC | 08 Jan 23 21:18 UTC |
	| start   | -p no-preload-211859                              | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:18 UTC |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| delete  | -p bridge-210619                                  | bridge-210619                | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:19 UTC |
	| delete  | -p calico-210619                                  | calico-210619                | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:19 UTC |
	| start   | -p embed-certs-211950                             | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:20 UTC |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-211952 | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:19 UTC |
	|         | disable-driver-mounts-211952                      |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC |                     |
	|         | default-k8s-diff-port-211952                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-211950       | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:20 UTC | 08 Jan 23 21:20 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p embed-certs-211950                             | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:20 UTC | 08 Jan 23 21:21 UTC |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-211950            | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:21 UTC | 08 Jan 23 21:21 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-211950                             | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:21 UTC |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 21:21:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:21:05.802454  252113 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:21:05.802654  252113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:21:05.802662  252113 out.go:309] Setting ErrFile to fd 2...
	I0108 21:21:05.802669  252113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:21:05.802789  252113 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:21:05.803305  252113 out.go:303] Setting JSON to false
	I0108 21:21:05.804864  252113 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3815,"bootTime":1673209051,"procs":557,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:21:05.804927  252113 start.go:135] virtualization: kvm guest
	I0108 21:21:05.807547  252113 out.go:177] * [embed-certs-211950] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:21:05.809328  252113 notify.go:220] Checking for updates...
	I0108 21:21:05.809354  252113 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:21:05.811105  252113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:21:05.812689  252113 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:21:05.814326  252113 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:21:05.815772  252113 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:21:05.817523  252113 config.go:180] Loaded profile config "embed-certs-211950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:21:05.817884  252113 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:21:05.850227  252113 docker.go:137] docker version: linux-20.10.22
	I0108 21:21:05.850311  252113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:21:05.950357  252113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2023-01-08 21:21:05.870996193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:21:05.950457  252113 docker.go:254] overlay module found
	I0108 21:21:05.952625  252113 out.go:177] * Using the docker driver based on existing profile
	I0108 21:21:05.953952  252113 start.go:294] selected driver: docker
	I0108 21:21:05.953965  252113 start.go:838] validating driver "docker" against &{Name:embed-certs-211950 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-211950 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:21:05.954060  252113 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:21:05.954880  252113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:21:06.055295  252113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2023-01-08 21:21:05.976172276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:21:06.055595  252113 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:21:06.055620  252113 cni.go:95] Creating CNI manager for ""
	I0108 21:21:06.055628  252113 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:21:06.055645  252113 start_flags.go:317] config:
	{Name:embed-certs-211950 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-211950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:21:06.057844  252113 out.go:177] * Starting control plane node embed-certs-211950 in cluster embed-certs-211950
	I0108 21:21:06.059242  252113 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:21:06.060605  252113 out.go:177] * Pulling base image ...
	I0108 21:21:06.061894  252113 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:21:06.061922  252113 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:21:06.061940  252113 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0108 21:21:06.061952  252113 cache.go:57] Caching tarball of preloaded images
	I0108 21:21:06.062182  252113 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:21:06.062204  252113 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0108 21:21:06.062345  252113 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/config.json ...
	I0108 21:21:06.088100  252113 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:21:06.088123  252113 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:21:06.088154  252113 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:21:06.088191  252113 start.go:364] acquiring machines lock for embed-certs-211950: {Name:mk0bdd56e7ab57c1368c3e82ee515d1652a3526b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:21:06.088291  252113 start.go:368] acquired machines lock for "embed-certs-211950" in 77.123µs
	I0108 21:21:06.088316  252113 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:21:06.088321  252113 fix.go:55] fixHost starting: 
	I0108 21:21:06.088519  252113 cli_runner.go:164] Run: docker container inspect embed-certs-211950 --format={{.State.Status}}
	I0108 21:21:06.113908  252113 fix.go:103] recreateIfNeeded on embed-certs-211950: state=Stopped err=<nil>
	W0108 21:21:06.113938  252113 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:21:06.116212  252113 out.go:177] * Restarting existing docker container for "embed-certs-211950" ...
	I0108 21:21:04.447959  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:06.449102  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:05.102742  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:07.602250  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:06.312183  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:08.810045  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:06.117745  252113 cli_runner.go:164] Run: docker start embed-certs-211950
	I0108 21:21:06.475305  252113 cli_runner.go:164] Run: docker container inspect embed-certs-211950 --format={{.State.Status}}
	I0108 21:21:06.503725  252113 kic.go:415] container "embed-certs-211950" state is running.
	I0108 21:21:06.504129  252113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-211950
	I0108 21:21:06.530036  252113 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/config.json ...
	I0108 21:21:06.530275  252113 machine.go:88] provisioning docker machine ...
	I0108 21:21:06.530298  252113 ubuntu.go:169] provisioning hostname "embed-certs-211950"
	I0108 21:21:06.530340  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:06.559072  252113 main.go:134] libmachine: Using SSH client type: native
	I0108 21:21:06.559258  252113 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33032 <nil> <nil>}
	I0108 21:21:06.559273  252113 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-211950 && echo "embed-certs-211950" | sudo tee /etc/hostname
	I0108 21:21:06.559914  252113 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56878->127.0.0.1:33032: read: connection reset by peer
	I0108 21:21:09.684380  252113 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-211950
	
	I0108 21:21:09.684467  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:09.708681  252113 main.go:134] libmachine: Using SSH client type: native
	I0108 21:21:09.708844  252113 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33032 <nil> <nil>}
	I0108 21:21:09.708871  252113 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-211950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-211950/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-211950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:21:09.827124  252113 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:21:09.827161  252113 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:21:09.827192  252113 ubuntu.go:177] setting up certificates
	I0108 21:21:09.827204  252113 provision.go:83] configureAuth start
	I0108 21:21:09.827263  252113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-211950
	I0108 21:21:09.852817  252113 provision.go:138] copyHostCerts
	I0108 21:21:09.852880  252113 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:21:09.852893  252113 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:21:09.852963  252113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:21:09.853060  252113 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:21:09.853069  252113 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:21:09.853093  252113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:21:09.853148  252113 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:21:09.853158  252113 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:21:09.853182  252113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:21:09.853235  252113 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.embed-certs-211950 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-211950]
	I0108 21:21:09.920653  252113 provision.go:172] copyRemoteCerts
	I0108 21:21:09.920714  252113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:21:09.920750  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:09.947707  252113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33032 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/embed-certs-211950/id_rsa Username:docker}
	I0108 21:21:10.030903  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:21:10.048184  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 21:21:10.065587  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:21:10.083096  252113 provision.go:86] duration metric: configureAuth took 255.875528ms
	I0108 21:21:10.083135  252113 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:21:10.083333  252113 config.go:180] Loaded profile config "embed-certs-211950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:21:10.083347  252113 machine.go:91] provisioned docker machine in 3.553058016s
	I0108 21:21:10.083354  252113 start.go:300] post-start starting for "embed-certs-211950" (driver="docker")
	I0108 21:21:10.083362  252113 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:21:10.083415  252113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:21:10.083452  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:10.109702  252113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33032 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/embed-certs-211950/id_rsa Username:docker}
	I0108 21:21:10.195016  252113 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:21:10.197818  252113 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:21:10.197840  252113 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:21:10.197851  252113 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:21:10.197857  252113 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:21:10.197865  252113 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:21:10.197912  252113 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:21:10.197977  252113 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:21:10.198052  252113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:21:10.204746  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:21:10.222465  252113 start.go:303] post-start completed in 139.096583ms
	I0108 21:21:10.222528  252113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:21:10.222583  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:10.248489  252113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33032 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/embed-certs-211950/id_rsa Username:docker}
	I0108 21:21:10.332052  252113 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:21:10.335995  252113 fix.go:57] fixHost completed within 4.247669326s
	I0108 21:21:10.336018  252113 start.go:83] releasing machines lock for "embed-certs-211950", held for 4.247709743s
	I0108 21:21:10.336091  252113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-211950
	I0108 21:21:10.362577  252113 ssh_runner.go:195] Run: cat /version.json
	I0108 21:21:10.362643  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:10.362655  252113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:21:10.362722  252113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-211950
	I0108 21:21:10.389523  252113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33032 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/embed-certs-211950/id_rsa Username:docker}
	I0108 21:21:10.390135  252113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33032 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/embed-certs-211950/id_rsa Username:docker}
	I0108 21:21:10.474910  252113 ssh_runner.go:195] Run: systemctl --version
	I0108 21:21:10.503672  252113 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:21:10.515405  252113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:21:10.525294  252113 docker.go:189] disabling docker service ...
	I0108 21:21:10.525338  252113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:21:10.535021  252113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:21:10.543823  252113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:21:10.626580  252113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:21:10.703580  252113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:21:10.712815  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:21:10.725307  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:21:10.733204  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:21:10.742003  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:21:10.749989  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 21:21:10.757996  252113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:21:10.764350  252113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:21:10.770752  252113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:21:08.948177  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:10.948447  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:09.602390  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:12.102789  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:10.810085  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:13.309701  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:10.843690  252113 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:21:10.910489  252113 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:21:10.910563  252113 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:21:10.914322  252113 start.go:472] Will wait 60s for crictl version
	I0108 21:21:10.914382  252113 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:21:10.939459  252113 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:21:10Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 21:21:13.448462  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:15.448772  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:17.948651  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:14.602345  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:17.102934  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:15.810341  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:17.811099  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:21.986836  252113 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:21:22.009302  252113 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:21:22.009370  252113 ssh_runner.go:195] Run: containerd --version
	I0108 21:21:22.032318  252113 ssh_runner.go:195] Run: containerd --version
	I0108 21:21:22.057723  252113 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:21:20.448682  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:22.948378  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:19.602327  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:21.602862  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:20.309592  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:22.309823  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:22.059129  252113 cli_runner.go:164] Run: docker network inspect embed-certs-211950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:21:22.082721  252113 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0108 21:21:22.086120  252113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:21:22.095607  252113 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:21:22.095676  252113 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:21:22.119292  252113 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:21:22.119314  252113 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:21:22.119353  252113 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:21:22.144549  252113 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:21:22.144574  252113 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:21:22.144617  252113 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:21:22.169507  252113 cni.go:95] Creating CNI manager for ""
	I0108 21:21:22.169531  252113 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:21:22.169546  252113 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:21:22.169563  252113 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-211950 NodeName:embed-certs-211950 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:21:22.169743  252113 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-211950"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:21:22.169858  252113 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-211950 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:embed-certs-211950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:21:22.169918  252113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:21:22.177488  252113 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:21:22.177552  252113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:21:22.184516  252113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (511 bytes)
	I0108 21:21:22.197565  252113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:21:22.210079  252113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2047 bytes)
	I0108 21:21:22.222327  252113 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:21:22.225285  252113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
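The two commands above make the control-plane.minikube.internal mapping idempotent: grep checks whether the 192.168.94.2 entry is already in /etc/hosts, and if not a bash one-liner strips any stale line for that name and appends the fresh one. A hedged Go equivalent of that ensure-hosts-entry step (the function name and in-place file handling are assumptions; minikube does this over SSH with the one-liner shown):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line mapping host and appends
// "ip<TAB>host", mirroring the grep + rewrite one-liner above. Sketch only.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(strings.TrimSpace(line), "\t"+host) {
			continue // stale mapping for this name, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.94.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}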
	I0108 21:21:22.234190  252113 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950 for IP: 192.168.94.2
	I0108 21:21:22.234285  252113 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:21:22.234322  252113 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:21:22.234389  252113 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/client.key
	I0108 21:21:22.234443  252113 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/apiserver.key.ad8e880a
	I0108 21:21:22.234517  252113 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/proxy-client.key
	I0108 21:21:22.234619  252113 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:21:22.234647  252113 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:21:22.234656  252113 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:21:22.234690  252113 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:21:22.234715  252113 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:21:22.234739  252113 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:21:22.234776  252113 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:21:22.235406  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:21:22.252804  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 21:21:22.269489  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:21:22.286176  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/embed-certs-211950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 21:21:22.302881  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:21:22.319924  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:21:22.336527  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:21:22.353096  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:21:22.369684  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:21:22.386382  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:21:22.403589  252113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:21:22.422540  252113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:21:22.434954  252113 ssh_runner.go:195] Run: openssl version
	I0108 21:21:22.439875  252113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:21:22.447293  252113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:21:22.450515  252113 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:21:22.450562  252113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:21:22.455232  252113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:21:22.461900  252113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:21:22.469022  252113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:21:22.471993  252113 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:21:22.472043  252113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:21:22.476628  252113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:21:22.483089  252113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:21:22.490167  252113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:21:22.493388  252113 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:21:22.493425  252113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:21:22.498191  252113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:21:22.505075  252113 kubeadm.go:396] StartCluster: {Name:embed-certs-211950 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-211950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:21:22.505169  252113 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:21:22.505219  252113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:21:22.530247  252113 cri.go:87] found id: "89a8de6f521f8243c799cb716667457963c6e97b5ba6b48214976b5969e46eb3"
	I0108 21:21:22.530269  252113 cri.go:87] found id: "d147c154d2b1bba1e7914547754b114d509b8f036c6ab17cc46cd16f2bb67804"
	I0108 21:21:22.530276  252113 cri.go:87] found id: "8c4edc81cee83db5f851592ab6e35f35d1a3dcbc676e2621c025ccd2e6d361f1"
	I0108 21:21:22.530282  252113 cri.go:87] found id: "deadf4ad2cb0b9ea2eccba64bfd495d97f73bb63315a5039a69dfa9bd91b9557"
	I0108 21:21:22.530288  252113 cri.go:87] found id: "96646d39dfe73748d7e64070179768bcb8d8dfeb8292891cf42b4e0f8e39ac8f"
	I0108 21:21:22.530294  252113 cri.go:87] found id: "0f13fbba981df0d0b39c780e1ad6e510287e450ece0fdc730f960c6dba03815b"
	I0108 21:21:22.530300  252113 cri.go:87] found id: "661098908290ee1fa99389ecb59d2a0cd00cb6464e959f754d124c4173502b64"
	I0108 21:21:22.530305  252113 cri.go:87] found id: "a7959ec8c708edd93b01c594af64ca292a111e1d721e2c8446d6efc28bba653a"
	I0108 21:21:22.530311  252113 cri.go:87] found id: ""
	I0108 21:21:22.530349  252113 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:21:22.542527  252113 cri.go:114] JSON = null
	W0108 21:21:22.542587  252113 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0108 21:21:22.542631  252113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:21:22.550243  252113 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:21:22.550264  252113 kubeadm.go:627] restartCluster start
	I0108 21:21:22.550299  252113 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:21:22.557319  252113 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:22.558314  252113 kubeconfig.go:135] verify returned: extract IP: "embed-certs-211950" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:21:22.558783  252113 kubeconfig.go:146] "embed-certs-211950" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:21:22.559413  252113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:21:22.560901  252113 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:21:22.567580  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:22.567625  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:22.575328  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:22.775525  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:22.775607  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:22.784331  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:22.975569  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:22.975661  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:22.984395  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:23.175545  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:23.175618  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:23.184269  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:23.375514  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:23.375606  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:23.384151  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:23.576476  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:23.576564  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:23.585154  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:23.776477  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:23.776559  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:23.785115  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:23.976398  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:23.976477  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:23.985629  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:24.175955  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:24.176027  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:24.185012  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:24.376357  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:24.376419  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:24.385370  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:24.575561  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:24.575652  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:24.584295  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:24.775523  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:24.775587  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:24.783989  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:24.976277  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:24.976357  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:24.984953  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.176244  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:25.176331  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:25.184911  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.376222  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:25.376301  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:25.385465  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.575785  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:25.575879  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:25.584484  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.584506  252113 api_server.go:165] Checking apiserver status ...
	I0108 21:21:25.584548  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:21:25.592781  252113 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.592805  252113 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0108 21:21:25.592811  252113 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:21:25.592822  252113 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:21:25.592860  252113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:21:25.618121  252113 cri.go:87] found id: "89a8de6f521f8243c799cb716667457963c6e97b5ba6b48214976b5969e46eb3"
	I0108 21:21:25.618143  252113 cri.go:87] found id: "d147c154d2b1bba1e7914547754b114d509b8f036c6ab17cc46cd16f2bb67804"
	I0108 21:21:25.618150  252113 cri.go:87] found id: "8c4edc81cee83db5f851592ab6e35f35d1a3dcbc676e2621c025ccd2e6d361f1"
	I0108 21:21:25.618156  252113 cri.go:87] found id: "deadf4ad2cb0b9ea2eccba64bfd495d97f73bb63315a5039a69dfa9bd91b9557"
	I0108 21:21:25.618162  252113 cri.go:87] found id: "96646d39dfe73748d7e64070179768bcb8d8dfeb8292891cf42b4e0f8e39ac8f"
	I0108 21:21:25.618168  252113 cri.go:87] found id: "0f13fbba981df0d0b39c780e1ad6e510287e450ece0fdc730f960c6dba03815b"
	I0108 21:21:25.618174  252113 cri.go:87] found id: "661098908290ee1fa99389ecb59d2a0cd00cb6464e959f754d124c4173502b64"
	I0108 21:21:25.618180  252113 cri.go:87] found id: "a7959ec8c708edd93b01c594af64ca292a111e1d721e2c8446d6efc28bba653a"
	I0108 21:21:25.618186  252113 cri.go:87] found id: ""
	I0108 21:21:25.618194  252113 cri.go:232] Stopping containers: [89a8de6f521f8243c799cb716667457963c6e97b5ba6b48214976b5969e46eb3 d147c154d2b1bba1e7914547754b114d509b8f036c6ab17cc46cd16f2bb67804 8c4edc81cee83db5f851592ab6e35f35d1a3dcbc676e2621c025ccd2e6d361f1 deadf4ad2cb0b9ea2eccba64bfd495d97f73bb63315a5039a69dfa9bd91b9557 96646d39dfe73748d7e64070179768bcb8d8dfeb8292891cf42b4e0f8e39ac8f 0f13fbba981df0d0b39c780e1ad6e510287e450ece0fdc730f960c6dba03815b 661098908290ee1fa99389ecb59d2a0cd00cb6464e959f754d124c4173502b64 a7959ec8c708edd93b01c594af64ca292a111e1d721e2c8446d6efc28bba653a]
	I0108 21:21:25.618232  252113 ssh_runner.go:195] Run: which crictl
	I0108 21:21:25.621048  252113 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 89a8de6f521f8243c799cb716667457963c6e97b5ba6b48214976b5969e46eb3 d147c154d2b1bba1e7914547754b114d509b8f036c6ab17cc46cd16f2bb67804 8c4edc81cee83db5f851592ab6e35f35d1a3dcbc676e2621c025ccd2e6d361f1 deadf4ad2cb0b9ea2eccba64bfd495d97f73bb63315a5039a69dfa9bd91b9557 96646d39dfe73748d7e64070179768bcb8d8dfeb8292891cf42b4e0f8e39ac8f 0f13fbba981df0d0b39c780e1ad6e510287e450ece0fdc730f960c6dba03815b 661098908290ee1fa99389ecb59d2a0cd00cb6464e959f754d124c4173502b64 a7959ec8c708edd93b01c594af64ca292a111e1d721e2c8446d6efc28bba653a
	I0108 21:21:25.647817  252113 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:21:25.657541  252113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:21:25.664561  252113 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan  8 21:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan  8 21:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Jan  8 21:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan  8 21:20 /etc/kubernetes/scheduler.conf
	
	I0108 21:21:25.664619  252113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 21:21:25.671011  252113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 21:21:25.677375  252113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 21:21:25.683797  252113 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.683846  252113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 21:21:25.689922  252113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 21:21:25.696159  252113 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:21:25.696204  252113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 21:21:25.702527  252113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:21:25.708916  252113 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:21:25.708938  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:21:25.752274  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:21:25.448237  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:27.948001  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:24.102717  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:26.602573  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:24.809637  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:26.810185  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:28.810565  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:26.771186  252113 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.018881983s)
	I0108 21:21:26.771221  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:21:26.910605  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:21:26.962648  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
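Because existing configuration files were found, restartCluster replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml instead of running a full kubeadm init; the addon phase follows later once the apiserver is healthy. A rough Go sketch of driving that phase sequence with os/exec (the binary path, config path, and PATH override are copied from the log; the sudo/SSH plumbing and error handling are simplified assumptions):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const (
		binDir = "/var/lib/minikube/binaries/v1.25.3"
		config = "/var/tmp/minikube/kubeadm.yaml"
	)
	// Phase order replayed during the cluster restart in the log above;
	// "addon all" runs separately once the apiserver answers /healthz.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", config)
		cmd := exec.Command(binDir+"/kubeadm", args...) // the real run is wrapped in sudo over SSH
		cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm init phase %q failed: %v\n%s", phase, err, out)
			os.Exit(1)
		}
		fmt.Println("completed phase:", phase)
	}
}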
	I0108 21:21:27.049416  252113 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:21:27.049533  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:21:27.614351  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:21:28.113890  252113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:21:28.125589  252113 api_server.go:71] duration metric: took 1.076175741s to wait for apiserver process to appear ...
	I0108 21:21:28.125678  252113 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:21:28.125706  252113 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:21:28.126079  252113 api_server.go:268] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0108 21:21:28.626473  252113 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:21:29.948452  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:31.948574  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:31.619403  252113 api_server.go:278] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0108 21:21:31.619437  252113 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0108 21:21:31.626775  252113 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:21:31.712269  252113 api_server.go:278] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:21:31.712321  252113 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:21:32.126802  252113 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:21:32.131550  252113 api_server.go:278] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:21:32.131592  252113 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:21:32.627202  252113 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:21:32.632820  252113 api_server.go:278] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:21:32.632854  252113 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:21:33.126355  252113 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:21:33.132259  252113 api_server.go:278] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0108 21:21:33.140648  252113 api_server.go:140] control plane version: v1.25.3
	I0108 21:21:33.140683  252113 api_server.go:130] duration metric: took 5.014986172s to wait for apiserver health ...
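The ~5s wait above is the usual progression of an apiserver restart: first connection refused while the static pod comes up, then 403 because the anonymous probe hits RBAC before the bootstrap roles exist, then 500 while post-start hooks (rbac/bootstrap-roles, apiservice-registration-controller) finish, and finally 200. A minimal Go sketch of polling the same /healthz endpoint (the 500ms interval matches the log; InsecureSkipVerify is an illustrative shortcut, a real client should trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Anonymous probe of the health endpoint polled in the log above.
	// InsecureSkipVerify is for illustration only; a proper client would
	// verify against the cluster CA (/var/lib/minikube/certs/ca.crt).
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused while the static pod restarts
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d (%d bytes)\n", resp.StatusCode, len(body))
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}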
	I0108 21:21:33.140697  252113 cni.go:95] Creating CNI manager for ""
	I0108 21:21:33.140707  252113 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:21:33.143250  252113 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:21:29.102196  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:31.102881  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:31.310002  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:33.809947  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:33.145039  252113 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:21:33.149495  252113 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:21:33.149517  252113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:21:33.165823  252113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:21:34.423055  252113 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.257190006s)
	I0108 21:21:34.423131  252113 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:21:34.431806  252113 system_pods.go:59] 9 kube-system pods found
	I0108 21:21:34.431843  252113 system_pods.go:61] "coredns-565d847f94-phg9v" [2a976fdd-21b3-4dee-a33c-ccd2c57d8be9] Running
	I0108 21:21:34.431856  252113 system_pods.go:61] "etcd-embed-certs-211950" [4971d596-11e2-4364-a509-52a06bf77e09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:21:34.431864  252113 system_pods.go:61] "kindnet-26wwc" [02f0fed5-e625-4740-aa5e-d77817ca124b] Running
	I0108 21:21:34.431884  252113 system_pods.go:61] "kube-apiserver-embed-certs-211950" [ba0d2dbe-2dbb-4a40-b2cc-da82f163d7f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 21:21:34.431900  252113 system_pods.go:61] "kube-controller-manager-embed-certs-211950" [0877b5ea-137d-4d80-a5d2-fd95544ba3bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:21:34.431916  252113 system_pods.go:61] "kube-proxy-ggxgh" [1bd15143-26d2-4a26-a52e-362676c5397b] Running
	I0108 21:21:34.431928  252113 system_pods.go:61] "kube-scheduler-embed-certs-211950" [41b9dede-c0fb-4644-8fa3-51d3eccd950b] Running
	I0108 21:21:34.431942  252113 system_pods.go:61] "metrics-server-5c8fd5cf8-szzjr" [488ef49e-82e4-443b-8f03-3726c44719af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:21:34.431954  252113 system_pods.go:61] "storage-provisioner" [024b335d-c262-457c-8773-924e20b66407] Running
	I0108 21:21:34.431962  252113 system_pods.go:74] duration metric: took 8.820242ms to wait for pod list to return data ...
	I0108 21:21:34.431976  252113 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:21:34.436028  252113 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:21:34.436084  252113 node_conditions.go:123] node cpu capacity is 8
	I0108 21:21:34.436102  252113 node_conditions.go:105] duration metric: took 4.121302ms to run NodePressure ...
	I0108 21:21:34.436123  252113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:21:34.580079  252113 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 21:21:34.583873  252113 kubeadm.go:778] kubelet initialised
	I0108 21:21:34.583892  252113 kubeadm.go:779] duration metric: took 3.792429ms waiting for restarted kubelet to initialise ...
	I0108 21:21:34.583900  252113 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:21:34.589069  252113 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-phg9v" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:34.593053  252113 pod_ready.go:92] pod "coredns-565d847f94-phg9v" in "kube-system" namespace has status "Ready":"True"
	I0108 21:21:34.593070  252113 pod_ready.go:81] duration metric: took 3.977273ms waiting for pod "coredns-565d847f94-phg9v" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:34.593079  252113 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:34.448121  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:36.947638  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:33.602189  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:35.602598  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:38.102223  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:35.811248  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:38.309824  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:36.603328  252113 pod_ready.go:102] pod "etcd-embed-certs-211950" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:39.102636  252113 pod_ready.go:102] pod "etcd-embed-certs-211950" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:39.448188  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:41.448816  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:40.602552  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:43.103011  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:40.310117  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:42.310275  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:41.102721  252113 pod_ready.go:92] pod "etcd-embed-certs-211950" in "kube-system" namespace has status "Ready":"True"
	I0108 21:21:41.102749  252113 pod_ready.go:81] duration metric: took 6.509663521s waiting for pod "etcd-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:41.102765  252113 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:41.107086  252113 pod_ready.go:92] pod "kube-apiserver-embed-certs-211950" in "kube-system" namespace has status "Ready":"True"
	I0108 21:21:41.107102  252113 pod_ready.go:81] duration metric: took 4.330679ms waiting for pod "kube-apiserver-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:41.107110  252113 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:43.117124  252113 pod_ready.go:102] pod "kube-controller-manager-embed-certs-211950" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:45.616162  252113 pod_ready.go:102] pod "kube-controller-manager-embed-certs-211950" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:43.947639  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:45.948111  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:47.948466  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:45.603311  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:48.102009  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:44.809516  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:46.809649  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:48.810315  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:46.116423  252113 pod_ready.go:92] pod "kube-controller-manager-embed-certs-211950" in "kube-system" namespace has status "Ready":"True"
	I0108 21:21:46.116450  252113 pod_ready.go:81] duration metric: took 5.00933349s waiting for pod "kube-controller-manager-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:46.116461  252113 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ggxgh" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:46.120605  252113 pod_ready.go:92] pod "kube-proxy-ggxgh" in "kube-system" namespace has status "Ready":"True"
	I0108 21:21:46.120624  252113 pod_ready.go:81] duration metric: took 4.157414ms waiting for pod "kube-proxy-ggxgh" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:46.120633  252113 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:47.630047  252113 pod_ready.go:92] pod "kube-scheduler-embed-certs-211950" in "kube-system" namespace has status "Ready":"True"
	I0108 21:21:47.630074  252113 pod_ready.go:81] duration metric: took 1.509435424s waiting for pod "kube-scheduler-embed-certs-211950" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:47.630084  252113 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace to be "Ready" ...
	I0108 21:21:49.639550  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:50.447460  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:52.448611  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:50.102594  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:52.601892  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:51.309943  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:53.809699  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:52.139845  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:54.639170  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:54.947665  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:56.947700  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:54.602771  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:57.101897  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:21:56.310210  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:58.809748  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:21:57.141151  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:59.639435  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:21:58.949756  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:01.448451  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:21:59.101923  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:01.101962  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:03.102909  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:00.810593  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:03.310194  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:02.139593  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:04.639219  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:03.947604  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:05.948211  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:05.602550  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:07.602641  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:05.809750  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:07.809939  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:06.639683  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:09.139384  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:08.447451  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:10.448497  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:12.948218  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:10.102194  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:12.102500  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:10.309596  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:12.309705  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:11.140038  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:13.639053  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:15.639818  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:15.449977  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:17.947675  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:14.102962  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:16.602092  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:14.310432  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:16.810413  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:18.139157  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:20.139713  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:19.947707  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:21.948479  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:19.102762  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:21.602905  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:19.309811  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:21.309972  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:23.810232  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:22.140404  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:24.142445  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:24.447621  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:26.448004  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:24.102645  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:26.602186  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:25.810410  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:28.310174  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:26.639220  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:28.640090  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:28.947732  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:31.448269  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:28.602252  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:31.102800  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:30.310481  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:32.311174  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:31.139684  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:33.140111  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:35.639008  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:33.948439  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:36.448349  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:33.602137  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:36.101829  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:38.102615  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:34.810711  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:37.310466  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:37.639384  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:39.639644  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:38.948577  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:41.447813  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:40.102951  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:42.104260  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:39.810530  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:42.309510  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:42.141404  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:44.639565  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:43.448406  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:45.947675  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:47.948164  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:44.602836  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:47.102043  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:44.310568  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:46.809625  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:48.809973  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:46.640262  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:49.139383  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:50.448245  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:52.948450  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:49.102979  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:51.601953  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:50.810329  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:53.310062  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:51.639284  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:54.139362  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:55.447735  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:57.448306  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:22:53.602267  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:55.602823  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:58.101977  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:22:55.810600  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:58.310283  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:22:56.139671  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:58.639895  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:59.448595  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:01.448628  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:00.102056  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:23:02.602631  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:23:00.310562  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:02.810458  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:01.139847  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:03.140497  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:05.639659  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:05.102809  234278 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:23:06.104150  234278 node_ready.go:38] duration metric: took 4m0.01108953s waiting for node "old-k8s-version-211828" to be "Ready" ...
	I0108 21:23:06.106363  234278 out.go:177] 
	W0108 21:23:06.107917  234278 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:23:06.107934  234278 out.go:239] * 
	W0108 21:23:06.108813  234278 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:23:06.110698  234278 out.go:177] 
	I0108 21:23:03.948469  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:06.448911  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:05.310071  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:07.310184  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:08.140432  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:10.639318  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:08.947910  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:11.447912  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:09.810127  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:12.310383  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:12.640406  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:15.138954  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:13.448469  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:15.947494  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:17.948406  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:14.310499  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:16.809792  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:18.810138  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:17.141830  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:19.639941  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:20.447825  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:22.448183  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:21.309926  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:23.310592  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:22.139220  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:24.139989  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:24.448405  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:26.948163  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:25.810236  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:27.810585  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:26.639866  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:29.140483  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:29.447764  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:31.448133  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:30.309806  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:32.309936  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:31.140935  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:33.639522  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:33.448453  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:35.947611  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:37.947842  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:34.809756  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:37.310339  238176 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:23:38.812410  238176 node_ready.go:38] duration metric: took 4m0.228660027s waiting for node "no-preload-211859" to be "Ready" ...
	I0108 21:23:38.814872  238176 out.go:177] 
	W0108 21:23:38.817068  238176 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:23:38.817087  238176 out.go:239] * 
	W0108 21:23:38.817914  238176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:23:38.820219  238176 out.go:177] 
	I0108 21:23:36.139577  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:38.139869  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:40.640622  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:40.447666  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:42.447882  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:43.139185  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:45.139501  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:44.448094  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:46.448207  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:47.639387  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:49.639743  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:48.948453  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:51.448318  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:52.139558  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:54.139627  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:53.448748  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:55.948302  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:57.948356  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:23:56.139927  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:23:58.639716  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:24:00.447631  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:02.448258  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:01.139624  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:24:03.139715  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:24:05.639221  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:24:04.947683  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:06.948594  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:08.139229  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:24:10.139571  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:24:09.448299  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:11.448411  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:12.639812  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:24:15.140501  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:24:13.948419  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:16.447752  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:17.639431  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:24:19.639647  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:24:18.448212  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:20.948454  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:22.140094  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:24:24.639169  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:24:23.448057  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:25.948520  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:26.640220  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:24:29.139436  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:24:28.448303  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:30.948633  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:31.139580  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:24:33.140600  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:24:35.639699  252113 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-szzjr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:24:33.448497  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:35.448648  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:37.448697  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:39.948153  245190 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:24:39.950285  245190 node_ready.go:38] duration metric: took 4m0.008748338s waiting for node "default-k8s-diff-port-211952" to be "Ready" ...
	I0108 21:24:39.952493  245190 out.go:177] 
	W0108 21:24:39.954000  245190 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:24:39.954018  245190 out.go:239] * 
	W0108 21:24:39.954886  245190 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:24:39.956750  245190 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	26b22a04bdb01       d6e3e26021b60       About a minute ago   Running             kindnet-cni               1                   ec976b233877d
	1fa79460d9970       d6e3e26021b60       4 minutes ago        Exited              kindnet-cni               0                   ec976b233877d
	7bd93fc5f6581       beaaf00edd38a       4 minutes ago        Running             kube-proxy                0                   024e28d63934a
	26d1b1e130787       6d23ec0e8b87e       4 minutes ago        Running             kube-scheduler            0                   4dc05b9437d19
	581d92e607165       0346dbd74bcb9       4 minutes ago        Running             kube-apiserver            0                   72e3dc94d266d
	e519152964881       a8a176a5d5d69       4 minutes ago        Running             etcd                      0                   559e4f8929fdb
	b7739474207ce       6039992312758       4 minutes ago        Running             kube-controller-manager   0                   88b0b0b5461c4
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sun 2023-01-08 21:20:01 UTC, end at Sun 2023-01-08 21:24:40 UTC. --
	Jan 08 21:20:39 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:20:39.068994388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:20:39 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:20:39.069007053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:20:39 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:20:39.069223368Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/024e28d63934ac52949c151f7829a693d67d5a8193a391a55d8b65d2b8150ccf pid=1702 runtime=io.containerd.runc.v2
	Jan 08 21:20:39 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:20:39.070942966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:20:39 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:20:39.071017309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:20:39 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:20:39.071032874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:20:39 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:20:39.071222255Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec976b233877df6d70050f28bed5c493233ac119d07827fd8f061999461e22dc pid=1710 runtime=io.containerd.runc.v2
	Jan 08 21:20:39 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:20:39.133877087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hz8lw,Uid:fa7c0714-1e45-4256-9383-976e79d1e49e,Namespace:kube-system,Attempt:0,} returns sandbox id \"024e28d63934ac52949c151f7829a693d67d5a8193a391a55d8b65d2b8150ccf\""
	Jan 08 21:20:39 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:20:39.136764813Z" level=info msg="CreateContainer within sandbox \"024e28d63934ac52949c151f7829a693d67d5a8193a391a55d8b65d2b8150ccf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Jan 08 21:20:39 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:20:39.152327453Z" level=info msg="CreateContainer within sandbox \"024e28d63934ac52949c151f7829a693d67d5a8193a391a55d8b65d2b8150ccf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc\""
	Jan 08 21:20:39 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:20:39.153011678Z" level=info msg="StartContainer for \"7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc\""
	Jan 08 21:20:39 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:20:39.214832688Z" level=info msg="StartContainer for \"7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc\" returns successfully"
	Jan 08 21:20:39 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:20:39.411636317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-52cqk,Uid:4ae6659c-e68a-492e-9e3f-5ffb047114c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec976b233877df6d70050f28bed5c493233ac119d07827fd8f061999461e22dc\""
	Jan 08 21:20:39 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:20:39.414563211Z" level=info msg="CreateContainer within sandbox \"ec976b233877df6d70050f28bed5c493233ac119d07827fd8f061999461e22dc\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Jan 08 21:20:39 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:20:39.428779962Z" level=info msg="CreateContainer within sandbox \"ec976b233877df6d70050f28bed5c493233ac119d07827fd8f061999461e22dc\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"1fa79460d9970b0b01c36d10dcdf208d3b29a541c31eade64c5d1edc1396a415\""
	Jan 08 21:20:39 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:20:39.429406085Z" level=info msg="StartContainer for \"1fa79460d9970b0b01c36d10dcdf208d3b29a541c31eade64c5d1edc1396a415\""
	Jan 08 21:20:39 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:20:39.528665042Z" level=info msg="StartContainer for \"1fa79460d9970b0b01c36d10dcdf208d3b29a541c31eade64c5d1edc1396a415\" returns successfully"
	Jan 08 21:23:20 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:23:20.052879728Z" level=info msg="shim disconnected" id=1fa79460d9970b0b01c36d10dcdf208d3b29a541c31eade64c5d1edc1396a415
	Jan 08 21:23:20 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:23:20.052933600Z" level=warning msg="cleaning up after shim disconnected" id=1fa79460d9970b0b01c36d10dcdf208d3b29a541c31eade64c5d1edc1396a415 namespace=k8s.io
	Jan 08 21:23:20 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:23:20.052943357Z" level=info msg="cleaning up dead shim"
	Jan 08 21:23:20 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:23:20.061943912Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:23:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2126 runtime=io.containerd.runc.v2\n"
	Jan 08 21:23:20 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:23:20.591022890Z" level=info msg="CreateContainer within sandbox \"ec976b233877df6d70050f28bed5c493233ac119d07827fd8f061999461e22dc\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Jan 08 21:23:20 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:23:20.608231463Z" level=info msg="CreateContainer within sandbox \"ec976b233877df6d70050f28bed5c493233ac119d07827fd8f061999461e22dc\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"26b22a04bdb019a098d0f1f44a38a46d29d47cd2436b0841e91973eb9ee5d16f\""
	Jan 08 21:23:20 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:23:20.608815702Z" level=info msg="StartContainer for \"26b22a04bdb019a098d0f1f44a38a46d29d47cd2436b0841e91973eb9ee5d16f\""
	Jan 08 21:23:20 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:23:20.727990043Z" level=info msg="StartContainer for \"26b22a04bdb019a098d0f1f44a38a46d29d47cd2436b0841e91973eb9ee5d16f\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-211952
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-211952
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=default-k8s-diff-port-211952
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T21_20_27_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 21:20:23 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-211952
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 08 Jan 2023 21:24:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 21:20:36 +0000   Sun, 08 Jan 2023 21:20:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 21:20:36 +0000   Sun, 08 Jan 2023 21:20:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 21:20:36 +0000   Sun, 08 Jan 2023 21:20:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 08 Jan 2023 21:20:36 +0000   Sun, 08 Jan 2023 21:20:20 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-diff-port-211952
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                fe5ecc0a-a17f-4998-8022-5b0438ac303f
	  Boot ID:                    abb1671c-ddf5-4694-bdc8-1024e5cc0b18
	  Kernel Version:             5.15.0-1025-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.10
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-diff-port-211952                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m15s
	  kube-system                 kindnet-52cqk                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m3s
	  kube-system                 kube-apiserver-default-k8s-diff-port-211952             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-211952    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-proxy-hz8lw                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-default-k8s-diff-port-211952             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m1s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m22s (x5 over 4m22s)  kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x5 over 4m22s)  kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x5 over 4m22s)  kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m15s                  kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s                  kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s                  kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m4s                   node-controller  Node default-k8s-diff-port-211952 event: Registered Node default-k8s-diff-port-211952 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +2.971851] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027844] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027909] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[Jan 8 21:19] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.006215] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023951] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.967852] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.035798] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023925] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.940341] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.027361] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.019905] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	
	* 
	* ==> etcd [e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa] <==
	* {"level":"info","ts":"2023-01-08T21:20:20.222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-01-08T21:20:20.222Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-01-08T21:20:20.223Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-08T21:20:20.224Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-01-08T21:20:20.224Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-01-08T21:20:20.224Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-08T21:20:20.224Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:default-k8s-diff-port-211952 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:20:20.414Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:20:20.414Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-08T21:20:20.414Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-08T21:20:20.414Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:20:20.414Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:20:20.414Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:20:20.415Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-08T21:20:20.415Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	
	* 
	* ==> kernel <==
	*  21:24:41 up  1:07,  0 users,  load average: 0.20, 0.94, 1.54
	Linux default-k8s-diff-port-211952 5.15.0-1025-gcp #32~20.04.2-Ubuntu SMP Tue Nov 29 08:31:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d] <==
	* I0108 21:20:23.209769       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0108 21:20:23.209857       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:20:23.210244       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0108 21:20:23.210328       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:20:23.215976       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0108 21:20:23.218195       1 controller.go:616] quota admission added evaluator for: namespaces
	I0108 21:20:23.254453       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:20:23.310287       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0108 21:20:23.838717       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 21:20:24.058948       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0108 21:20:24.061828       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0108 21:20:24.061850       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 21:20:24.399270       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:20:24.428887       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 21:20:24.527386       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0108 21:20:24.532706       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0108 21:20:24.533803       1 controller.go:616] quota admission added evaluator for: endpoints
	I0108 21:20:24.537243       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 21:20:25.141317       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0108 21:20:25.989727       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0108 21:20:25.999258       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0108 21:20:26.006195       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0108 21:20:26.084892       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:20:38.698379       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0108 21:20:38.849178       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d] <==
	* I0108 21:20:37.996090       1 shared_informer.go:262] Caches are synced for crt configmap
	I0108 21:20:38.001667       1 shared_informer.go:262] Caches are synced for node
	I0108 21:20:38.001690       1 range_allocator.go:166] Starting range CIDR allocator
	I0108 21:20:38.001704       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0108 21:20:38.001715       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0108 21:20:38.006243       1 range_allocator.go:367] Set node default-k8s-diff-port-211952 PodCIDR to [10.244.0.0/24]
	I0108 21:20:38.017669       1 shared_informer.go:262] Caches are synced for attach detach
	I0108 21:20:38.042143       1 shared_informer.go:262] Caches are synced for stateful set
	I0108 21:20:38.045285       1 shared_informer.go:262] Caches are synced for expand
	I0108 21:20:38.090843       1 shared_informer.go:262] Caches are synced for deployment
	I0108 21:20:38.090843       1 shared_informer.go:262] Caches are synced for disruption
	I0108 21:20:38.141757       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 21:20:38.143906       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0108 21:20:38.182717       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0108 21:20:38.200627       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 21:20:38.504455       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 21:20:38.504480       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0108 21:20:38.519923       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 21:20:38.705861       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hz8lw"
	I0108 21:20:38.708993       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-52cqk"
	I0108 21:20:38.851131       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I0108 21:20:39.000111       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-fd94f"
	I0108 21:20:39.004180       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-w786w"
	I0108 21:20:39.370532       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I0108 21:20:39.379431       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-w786w"
	
	* 
	* ==> kube-proxy [7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc] <==
	* I0108 21:20:39.252698       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0108 21:20:39.252848       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0108 21:20:39.252879       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0108 21:20:39.273356       1 server_others.go:206] "Using iptables Proxier"
	I0108 21:20:39.273390       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0108 21:20:39.273401       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0108 21:20:39.273419       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0108 21:20:39.273461       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:20:39.273614       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:20:39.273852       1 server.go:661] "Version info" version="v1.25.3"
	I0108 21:20:39.273873       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:20:39.274443       1 config.go:317] "Starting service config controller"
	I0108 21:20:39.274469       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0108 21:20:39.274476       1 config.go:226] "Starting endpoint slice config controller"
	I0108 21:20:39.274496       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0108 21:20:39.274534       1 config.go:444] "Starting node config controller"
	I0108 21:20:39.274554       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0108 21:20:39.375304       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0108 21:20:39.375333       1 shared_informer.go:262] Caches are synced for service config
	I0108 21:20:39.375369       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225] <==
	* W0108 21:20:23.231531       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:20:23.235629       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:20:23.231729       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:20:23.235656       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 21:20:23.231874       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:20:23.235675       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:20:23.233627       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 21:20:23.235694       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 21:20:23.233737       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:20:23.235714       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:20:23.233741       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:20:23.235733       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 21:20:23.234883       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 21:20:23.235751       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 21:20:24.073855       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:20:24.073894       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:20:24.079980       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:20:24.080020       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 21:20:24.108284       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:20:24.108322       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:20:24.169681       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:20:24.169717       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 21:20:24.247187       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:20:24.247220       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0108 21:20:26.327263       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:20:01 UTC, end at Sun 2023-01-08 21:24:41 UTC. --
	Jan 08 21:22:41 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:22:41.401433    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:22:46 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:22:46.402941    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:22:51 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:22:51.403948    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:22:56 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:22:56.405190    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:01 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:23:01.406221    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:06 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:23:06.407760    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:11 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:23:11.408634    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:16 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:23:16.409475    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:20 default-k8s-diff-port-211952 kubelet[1322]: I0108 21:23:20.588831    1322 scope.go:115] "RemoveContainer" containerID="1fa79460d9970b0b01c36d10dcdf208d3b29a541c31eade64c5d1edc1396a415"
	Jan 08 21:23:21 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:23:21.410769    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:26 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:23:26.411658    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:31 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:23:31.412863    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:36 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:23:36.414280    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:41 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:23:41.415228    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:46 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:23:46.415982    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:51 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:23:51.417254    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:23:56 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:23:56.418864    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:24:01 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:24:01.420053    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:24:06 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:24:06.421533    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:24:11 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:24:11.423043    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:24:16 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:24:16.424342    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:24:21 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:24:21.425170    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:24:26 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:24:26.426901    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:24:31 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:24:31.428556    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:24:36 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:24:36.429957    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-211952 -n default-k8s-diff-port-211952
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-211952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-565d847f94-fd94f storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/FirstStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-diff-port-211952 describe pod coredns-565d847f94-fd94f storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-211952 describe pod coredns-565d847f94-fd94f storage-provisioner: exit status 1 (60.308364ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-fd94f" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-diff-port-211952 describe pod coredns-565d847f94-fd94f storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (288.99s)
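Note (not part of the captured run): the kubelet log above repeats "Network plugin returns error: cni plugin not initialized" for the whole wait period, so the node never reported Ready and coredns/storage-provisioner never scheduled. A minimal way to confirm this against the profile, assuming the default-k8s-diff-port-211952 kubeconfig context and the built minikube binary are still available:
	kubectl --context default-k8s-diff-port-211952 get nodes
	kubectl --context default-k8s-diff-port-211952 -n kube-system get pods -o wide
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-211952 -- sudo ls /etc/cni/net.d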

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (484.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-211828 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [28b05d09-7968-40ce-a457-046da3b85782] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
E0108 21:23:14.521977   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:23:18.423758   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 21:23:35.002146   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:23:36.691865   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
E0108 21:23:36.697118   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
E0108 21:23:36.707347   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
E0108 21:23:36.727584   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
E0108 21:23:36.767868   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
E0108 21:23:36.848180   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
E0108 21:23:37.008594   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
E0108 21:23:37.329663   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
E0108 21:23:37.970142   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: ***** TestStartStop/group/old-k8s-version/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-211828 -n old-k8s-version-211828
start_stop_delete_test.go:196: TestStartStop/group/old-k8s-version/serial/DeployApp: showing logs for failed pods as of 2023-01-08 21:31:08.679065879 +0000 UTC m=+3830.822238171
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-211828 describe po busybox -n default
start_stop_delete_test.go:196: (dbg) kubectl --context old-k8s-version-211828 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jz8cr (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  default-token-jz8cr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jz8cr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  8m                  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Warning  FailedScheduling  6m52s (x1 over 8m)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-211828 logs busybox -n default
start_stop_delete_test.go:196: (dbg) kubectl --context old-k8s-version-211828 logs busybox -n default:
start_stop_delete_test.go:196: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
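Note (not part of the captured run): the FailedScheduling events above show the single node still carried a taint the busybox pod does not tolerate; on a one-node cluster this is usually the node.kubernetes.io/not-ready:NoSchedule taint left in place while the CNI is down. A quick check, assuming the old-k8s-version-211828 kubeconfig context is still reachable:
	kubectl --context old-k8s-version-211828 get nodes
	kubectl --context old-k8s-version-211828 describe node old-k8s-version-211828 | grep -A3 -i taints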
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-211828
helpers_test.go:235: (dbg) docker inspect old-k8s-version-211828:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9",
	        "Created": "2023-01-08T21:18:34.933200191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 235016,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:18:35.293925019Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/hostname",
	        "HostsPath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/hosts",
	        "LogPath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9-json.log",
	        "Name": "/old-k8s-version-211828",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-211828:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-211828",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c-init/diff:/var/lib/docker/overlay2/08b33854295261bb38a0074055b3dd7f2f489d35ea70ee5a594182ad1dc0fd2b/diff:/var/lib/docker/overlay2/5abe192e44c49d2a25560e4c1ae4a6ba5ec58100f8e1869d306a28b614caf414/diff:/var/lib/docker/overlay2/156a32df2dac676d7cd6558d5ad13be5054eb2fddc8d1b10137bf34bd6b1ddf7/diff:/var/lib/docker/overlay2/f3a5b8630133b5cdc48bbdb98f666efad221386dbd08f5fc096c7ba6d3f31667/diff:/var/lib/docker/overlay2/4d4ccd218e479874818175ec6c4e28f06f0084eed74b9a83af5a96e8bf41e6c7/diff:/var/lib/docker/overlay2/ab8a3247b29cb45a0024dbc2477a50b042f03e39064c09e23d21a2b5f331013c/diff:/var/lib/docker/overlay2/cd6161fa5d366910a9eb537894f374c49b0ba256c81a9c1ad3d4f4786d730668/diff:/var/lib/docker/overlay2/a26d88db409214dee405ed385f40b0d58e37ef670c26e65c1132eb27e3bb984d/diff:/var/lib/docker/overlay2/f76d833c20bf46fe7aec412f931039cdeb58b129552b97228c55fada8b05e63a/diff:/var/lib/docker/overlay2/584beb
80220462b8bc5a6c651735d55544ebef775c71b1a91b92235049a7e9da/diff:/var/lib/docker/overlay2/5209ebd508837f05291d5f22eff307db47f9c94691dd096cdc6ff6446db45461/diff:/var/lib/docker/overlay2/968fc302c0737742d4698bb0026b5cfc50c4b2934930db492c97b346fca2bd67/diff:/var/lib/docker/overlay2/b469d09ff82a75fb0f6dd04264a04b21d3d113c8577afc6dbc6a7dafe4022179/diff:/var/lib/docker/overlay2/2869116cf2f02bdc6170bde623a11022eba8e5c0f84630c9a0fd0a86483d0ed2/diff:/var/lib/docker/overlay2/ad4794ae637b0ce306963007b35562110fba050d9e3cd6196830448c749d7cc6/diff:/var/lib/docker/overlay2/a53b191897dfffb12ad2948cad16d668a0f7c2be5e3e6fd2556fe22d8d608879/diff:/var/lib/docker/overlay2/bbc7772821406640e29f442eb8752d92d7b72e8314035d6cf94255976b90642e/diff:/var/lib/docker/overlay2/5e3ac5600af83e3f6f756895fbd7a66fa97e799f659541508353078d057fd89b/diff:/var/lib/docker/overlay2/79c23e7084aa1de637cb09a7c07679d6c8d4ff6c6e7384f7f15cb549f8186fd6/diff:/var/lib/docker/overlay2/86ef1d4759376f1edb9cf43004ae59ba61200d79acea357b872de5247071531a/diff:/var/lib/d
ocker/overlay2/5c948a7514e051c2a3b4600944d475a674404c1f58b1bd7fd4d7b764be35c3e7/diff:/var/lib/docker/overlay2/71b25d81823337d776e0202f623b2f822a2ebba0e8f935e3211da145bbdac22d/diff:/var/lib/docker/overlay2/4261f26f8c837e7f8f07ba0cd2aca7274aa96561ac0bd104af28d44c15678f8d/diff:/var/lib/docker/overlay2/f2a71b21156c3ad367bb9fe8bde3e75df6681f9e505d34d8e1b635e50aa0e29b/diff:/var/lib/docker/overlay2/9ef1fde443b60bf4cda7ef2a51adcacef4383280f30eb6ee14e23959a5395e18/diff:/var/lib/docker/overlay2/aa727d5a432452b3e1ed2fc994db0bf2d5a47af89bc768a1e2b633baa8319025/diff:/var/lib/docker/overlay2/dc9499b738334b5626de1363086d853c4e6c2c5b2d51f58e8e1ed8346cc39f45/diff:/var/lib/docker/overlay2/03cc962675627b454b9857b5ffcac439143653d35cd42ad886571fbecd955c6c/diff:/var/lib/docker/overlay2/32064543ecb8bae609fda6b220233d7b73f4e601bd320e1d389451fb120c2459/diff:/var/lib/docker/overlay2/a0c00493cc3fdbdaa87f61faf83c8fcedb9ee4d5b24ecc35eceba0429b6dcc88/diff:/var/lib/docker/overlay2/18cbe77f8dab8144889e1deb46a8795fd207afbdfd9ea3fdeb526b72d7e
4a4cb/diff:/var/lib/docker/overlay2/dca4f5d0bca28bc75fa764afd39d07871f051c2d362fc2850ca957b975f75555/diff:/var/lib/docker/overlay2/cd071d8f661b9ff9c6d861fce06842188fda3d307062488b93944a97883f4a67/diff:/var/lib/docker/overlay2/5a0c166a1e11d0bb9f7de4a882be960f70ac972e91b3ab8fd83fc6eec2bf1d5e/diff:/var/lib/docker/overlay2/43eb3ae44d3bf3e6cbf5a66accacb1247ef210213c56a67097949d06794f4bbc/diff:/var/lib/docker/overlay2/b60cb5b525f4aed6b27cf2e451596e134d1185254f3798d5a91357814552e273/diff:/var/lib/docker/overlay2/e5c8c85e78fedb1981436f368ad70bdfe38d240dddceac3d103e1d09bc252cef/diff:/var/lib/docker/overlay2/7bf66ab7d17eda3c6760fe43588afa33bb56279f5a7202de9cc98b4711f0da5b/diff:/var/lib/docker/overlay2/d5dbd6e8a20b8681e7b55b57c382c8c9029da8496cc9e0f4d1eb3b224583535e/diff:/var/lib/docker/overlay2/de3ca9b514aa5071c0fd39d0d80fa080bce6e82e872747ec916ef7bf379c0c2a/diff:/var/lib/docker/overlay2/80bff03df7f2a6639d98a2c99d1cc55a36dc9110501452bfd7bce3bce496541e/diff:/var/lib/docker/overlay2/471bea5a4732f210c3fcf87a02643b615a96f6
39d01ac2c3e71f7886b23733a5/diff:/var/lib/docker/overlay2/29dd84326e3ef52c85008a60a7ff2225fd18f1d51ec3e60f143a0d93e442469d/diff:/var/lib/docker/overlay2/05d035eba3e40a396836d9917064869afd7ec70920f526f8903dc8f5a9f2dce3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-211828",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-211828/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-211828",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-211828",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-211828",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd7a2d331da5df8a5ad26b1a11ef8071062a8308e1e900de389b1fcbf053e8d0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33012"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33011"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33008"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33010"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33009"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cd7a2d331da5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-211828": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f66150df9bfb",
	                        "old-k8s-version-211828"
	                    ],
	                    "NetworkID": "e48a739a7de53b0a2a21ddeaf3e573efe5bbf8c41c6a15cbe1e7c39d0f359d82",
	                    "EndpointID": "b0b05a18f751ba3ee859f73690ebd1a61bca7d47388946fae5701f1b0d051310",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-211828 -n old-k8s-version-211828
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-211828 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-211828 logs -n 25: (1.046216137s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-211859                                       | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:18 UTC |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr                                          |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| delete  | -p bridge-210619                                           | bridge-210619                | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:19 UTC |
	| delete  | -p calico-210619                                           | calico-210619                | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:19 UTC |
	| start   | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:20 UTC |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                              |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-211952 | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:19 UTC |
	|         | disable-driver-mounts-211952                               |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC |                     |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-211950                | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:20 UTC | 08 Jan 23 21:20 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:20 UTC | 08 Jan 23 21:21 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-211950                     | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:21 UTC | 08 Jan 23 21:21 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:21 UTC | 08 Jan 23 21:26 UTC |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                              |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-211950 sudo                                 | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-212639                 | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-212639                      | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-212639 sudo                                  | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
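For reference, the flags recorded in the two newest-cni-212639 "start" rows above assemble into a single invocation along these lines (reconstructed from the table for easy copy/paste; not a verbatim quote of the harness):

    out/minikube-linux-amd64 start -p newest-cni-212639 --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.25.3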
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 21:27:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:27:28.765231  268133 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:27:28.765330  268133 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:27:28.765338  268133 out.go:309] Setting ErrFile to fd 2...
	I0108 21:27:28.765343  268133 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:27:28.765439  268133 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:27:28.765964  268133 out.go:303] Setting JSON to false
	I0108 21:27:28.767361  268133 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4198,"bootTime":1673209051,"procs":489,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:27:28.767426  268133 start.go:135] virtualization: kvm guest
	I0108 21:27:28.770169  268133 out.go:177] * [newest-cni-212639] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:27:28.771838  268133 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:27:28.771763  268133 notify.go:220] Checking for updates...
	I0108 21:27:28.773655  268133 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:27:28.775212  268133 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:27:28.776901  268133 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:27:28.778479  268133 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:27:28.780293  268133 config.go:180] Loaded profile config "newest-cni-212639": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:27:28.780707  268133 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:27:28.810535  268133 docker.go:137] docker version: linux-20.10.22
	I0108 21:27:28.810637  268133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:27:28.914574  268133 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-08 21:27:28.831108877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:27:28.914685  268133 docker.go:254] overlay module found
	I0108 21:27:28.917116  268133 out.go:177] * Using the docker driver based on existing profile
	I0108 21:27:28.918812  268133 start.go:294] selected driver: docker
	I0108 21:27:28.918827  268133 start.go:838] validating driver "docker" against &{Name:newest-cni-212639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-212639 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddre
ss: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:27:28.918922  268133 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:27:28.919894  268133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:27:29.021048  268133 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-08 21:27:28.941418387 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:27:29.021321  268133 start_flags.go:929] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0108 21:27:29.021343  268133 cni.go:95] Creating CNI manager for ""
	I0108 21:27:29.021349  268133 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:27:29.021359  268133 start_flags.go:317] config:
	{Name:newest-cni-212639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-212639 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h
0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:27:29.023686  268133 out.go:177] * Starting control plane node newest-cni-212639 in cluster newest-cni-212639
	I0108 21:27:29.025417  268133 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:27:29.027205  268133 out.go:177] * Pulling base image ...
	I0108 21:27:29.028641  268133 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:27:29.028684  268133 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0108 21:27:29.028697  268133 cache.go:57] Caching tarball of preloaded images
	I0108 21:27:29.028756  268133 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:27:29.028902  268133 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:27:29.028921  268133 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0108 21:27:29.029029  268133 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639/config.json ...
	I0108 21:27:29.055564  268133 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:27:29.055586  268133 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:27:29.055611  268133 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:27:29.055663  268133 start.go:364] acquiring machines lock for newest-cni-212639: {Name:mkda646b62b7d9c9186158724cd7269b307eb11f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:27:29.055782  268133 start.go:368] acquired machines lock for "newest-cni-212639" in 80.336µs
	I0108 21:27:29.055807  268133 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:27:29.055814  268133 fix.go:55] fixHost starting: 
	I0108 21:27:29.056094  268133 cli_runner.go:164] Run: docker container inspect newest-cni-212639 --format={{.State.Status}}
	I0108 21:27:29.080961  268133 fix.go:103] recreateIfNeeded on newest-cni-212639: state=Stopped err=<nil>
	W0108 21:27:29.080991  268133 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:27:29.083655  268133 out.go:177] * Restarting existing docker container for "newest-cni-212639" ...
	I0108 21:27:29.085590  268133 cli_runner.go:164] Run: docker start newest-cni-212639
	I0108 21:27:29.466862  268133 cli_runner.go:164] Run: docker container inspect newest-cni-212639 --format={{.State.Status}}
	I0108 21:27:29.495174  268133 kic.go:415] container "newest-cni-212639" state is running.
	I0108 21:27:29.495620  268133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-212639
	I0108 21:27:29.520835  268133 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639/config.json ...
	I0108 21:27:29.521069  268133 machine.go:88] provisioning docker machine ...
	I0108 21:27:29.521093  268133 ubuntu.go:169] provisioning hostname "newest-cni-212639"
	I0108 21:27:29.521172  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:29.548259  268133 main.go:134] libmachine: Using SSH client type: native
	I0108 21:27:29.548454  268133 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33042 <nil> <nil>}
	I0108 21:27:29.548474  268133 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-212639 && echo "newest-cni-212639" | sudo tee /etc/hostname
	I0108 21:27:29.549137  268133 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60116->127.0.0.1:33042: read: connection reset by peer
	I0108 21:27:32.679980  268133 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-212639
	
	I0108 21:27:32.680064  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:32.704822  268133 main.go:134] libmachine: Using SSH client type: native
	I0108 21:27:32.705004  268133 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33042 <nil> <nil>}
	I0108 21:27:32.705035  268133 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-212639' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-212639/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-212639' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:27:32.819191  268133 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:27:32.819229  268133 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:27:32.819265  268133 ubuntu.go:177] setting up certificates
	I0108 21:27:32.819276  268133 provision.go:83] configureAuth start
	I0108 21:27:32.819333  268133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-212639
	I0108 21:27:32.844313  268133 provision.go:138] copyHostCerts
	I0108 21:27:32.844376  268133 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:27:32.844396  268133 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:27:32.844470  268133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:27:32.844566  268133 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:27:32.844578  268133 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:27:32.844619  268133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:27:32.844692  268133 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:27:32.844702  268133 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:27:32.844737  268133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:27:32.844795  268133 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.newest-cni-212639 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-212639]
	I0108 21:27:33.024245  268133 provision.go:172] copyRemoteCerts
	I0108 21:27:33.024298  268133 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:27:33.024333  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:33.049410  268133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33042 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/newest-cni-212639/id_rsa Username:docker}
	I0108 21:27:33.134845  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:27:33.156477  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 21:27:33.175248  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:27:33.192199  268133 provision.go:86] duration metric: configureAuth took 372.90657ms
	I0108 21:27:33.192222  268133 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:27:33.192381  268133 config.go:180] Loaded profile config "newest-cni-212639": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:27:33.192391  268133 machine.go:91] provisioned docker machine in 3.671308135s
	I0108 21:27:33.192397  268133 start.go:300] post-start starting for "newest-cni-212639" (driver="docker")
	I0108 21:27:33.192403  268133 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:27:33.192436  268133 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:27:33.192465  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:33.217627  268133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33042 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/newest-cni-212639/id_rsa Username:docker}
	I0108 21:27:33.303289  268133 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:27:33.306127  268133 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:27:33.306149  268133 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:27:33.306158  268133 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:27:33.306163  268133 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:27:33.306171  268133 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:27:33.306212  268133 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:27:33.306273  268133 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:27:33.306347  268133 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:27:33.314075  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:27:33.331036  268133 start.go:303] post-start completed in 138.628109ms
	I0108 21:27:33.331098  268133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:27:33.331127  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:33.356822  268133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33042 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/newest-cni-212639/id_rsa Username:docker}
	I0108 21:27:33.440301  268133 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:27:33.444254  268133 fix.go:57] fixHost completed within 4.388434416s
	I0108 21:27:33.444285  268133 start.go:83] releasing machines lock for "newest-cni-212639", held for 4.388486617s
	I0108 21:27:33.444379  268133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-212639
	I0108 21:27:33.469347  268133 ssh_runner.go:195] Run: cat /version.json
	I0108 21:27:33.469403  268133 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:27:33.469413  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:33.469471  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:33.494776  268133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33042 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/newest-cni-212639/id_rsa Username:docker}
	I0108 21:27:33.498368  268133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33042 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/newest-cni-212639/id_rsa Username:docker}
	I0108 21:27:33.605894  268133 ssh_runner.go:195] Run: systemctl --version
	I0108 21:27:33.609960  268133 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:27:33.621087  268133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:27:33.630592  268133 docker.go:189] disabling docker service ...
	I0108 21:27:33.630638  268133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:27:33.640514  268133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:27:33.650036  268133 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:27:33.726594  268133 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:27:33.798762  268133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:27:33.807686  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:27:33.820033  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:27:33.827874  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:27:33.835458  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:27:33.842952  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 21:27:33.850460  268133 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:27:33.856397  268133 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:27:33.862289  268133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:27:33.939123  268133 ssh_runner.go:195] Run: sudo systemctl restart containerd
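After the containerd restart above, the keys rewritten by the preceding sed commands can be confirmed directly in the config file (an illustrative check on the node, not part of the test run):

    grep -E 'sandbox_image|SystemdCgroup|conf_dir|restrict_oom_score_adj' /etc/containerd/config.toml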
	I0108 21:27:34.004820  268133 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:27:34.004891  268133 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:27:34.008336  268133 start.go:472] Will wait 60s for crictl version
	I0108 21:27:34.008383  268133 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:27:34.034919  268133 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:27:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 21:27:45.082906  268133 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:27:45.105678  268133 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
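The "server is not initialized yet" retry above typically just means containerd's CRI plugin had not finished starting when the first probe ran; an equivalent manual probe against the same socket (the endpoint written to /etc/crictl.yaml earlier in this log) would be roughly:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info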
	I0108 21:27:45.105733  268133 ssh_runner.go:195] Run: containerd --version
	I0108 21:27:45.129490  268133 ssh_runner.go:195] Run: containerd --version
	I0108 21:27:45.154616  268133 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:27:45.156205  268133 cli_runner.go:164] Run: docker network inspect newest-cni-212639 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:27:45.179618  268133 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0108 21:27:45.182793  268133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:27:45.194213  268133 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0108 21:27:45.195893  268133 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:27:45.195991  268133 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:27:45.219527  268133 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:27:45.219551  268133 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:27:45.219596  268133 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:27:45.244817  268133 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:27:45.244842  268133 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:27:45.244912  268133 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:27:45.269367  268133 cni.go:95] Creating CNI manager for ""
	I0108 21:27:45.269386  268133 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:27:45.269398  268133 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0108 21:27:45.269413  268133 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-212639 NodeName:newest-cni-212639 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] Feat
ureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:27:45.269552  268133 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-212639"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
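The rendered config above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few entries below. If such a file needs to be sanity-checked by hand, kubeadm's dry-run mode is a non-destructive option (illustrative command, not part of this test run):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run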
	
	I0108 21:27:45.269652  268133 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-212639 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:newest-cni-212639 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
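The ExecStart above is installed as a systemd drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf just below); on the node, the merged unit and its current state could be inspected with something like:

    sudo systemctl cat kubelet
    sudo systemctl status kubelet --no-pager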
	I0108 21:27:45.269696  268133 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:27:45.276883  268133 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:27:45.276956  268133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:27:45.283579  268133 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (547 bytes)
	I0108 21:27:45.295934  268133 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:27:45.309249  268133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2182 bytes)
	I0108 21:27:45.322187  268133 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:27:45.325139  268133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:27:45.334399  268133 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639 for IP: 192.168.94.2
	I0108 21:27:45.334513  268133 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:27:45.334548  268133 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:27:45.334607  268133 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639/client.key
	I0108 21:27:45.334656  268133 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639/apiserver.key.ad8e880a
	I0108 21:27:45.334687  268133 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639/proxy-client.key
	I0108 21:27:45.334779  268133 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:27:45.334804  268133 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:27:45.334815  268133 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:27:45.334838  268133 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:27:45.334859  268133 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:27:45.334883  268133 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:27:45.334918  268133 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:27:45.335569  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:27:45.353098  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:27:45.369628  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:27:45.387573  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 21:27:45.405221  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:27:45.422744  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:27:45.439330  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:27:45.456759  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:27:45.473190  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:27:45.489674  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:27:45.507995  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:27:45.524758  268133 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:27:45.537439  268133 ssh_runner.go:195] Run: openssl version
	I0108 21:27:45.542325  268133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:27:45.549667  268133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:27:45.552593  268133 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:27:45.552630  268133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:27:45.557399  268133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:27:45.564013  268133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:27:45.570969  268133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:27:45.573973  268133 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:27:45.574010  268133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:27:45.578719  268133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:27:45.585541  268133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:27:45.592761  268133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:27:45.595725  268133 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:27:45.595771  268133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:27:45.601221  268133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
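The three test -L / ln -fs steps above follow OpenSSL's hashed-symlink convention: each CA under /usr/share/ca-certificates is linked into /etc/ssl/certs under the subject-hash name printed by the preceding openssl run (e.g. b5213941.0 for minikubeCA.pem). The hash can be recomputed manually:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem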
	I0108 21:27:45.607901  268133 kubeadm.go:396] StartCluster: {Name:newest-cni-212639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-212639 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet:
MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:27:45.608005  268133 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:27:45.608052  268133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:27:45.633536  268133 cri.go:87] found id: "78b701fbfc96f080824c257d20a49fe230b1eb5ca9370f534ff782908c77cf37"
	I0108 21:27:45.633570  268133 cri.go:87] found id: "1d25e6a5a7dea83f5e63bcafd8eb05c2d27fee85fbd87c58aca021bc8f201430"
	I0108 21:27:45.633583  268133 cri.go:87] found id: "95140a27d84ddbc15ae0a11f58614b689c8848aee9bc7fa466f8fca6c5bf7d8e"
	I0108 21:27:45.633593  268133 cri.go:87] found id: "b9a2bb0dcb7ae675901085e39942b017b75f9ae2eda52d93f49c5d03274061d2"
	I0108 21:27:45.633606  268133 cri.go:87] found id: "da92be5f72043d327d813ac6b780fc65927384a0b7de1840da48375726eed05b"
	I0108 21:27:45.633617  268133 cri.go:87] found id: "9311bf1fab69e6ab9a390d402eb197e8863ed8e5a3d87c6f80aa4b5fbde84c47"
	I0108 21:27:45.633626  268133 cri.go:87] found id: ""
	I0108 21:27:45.633669  268133 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:27:45.644525  268133 cri.go:114] JSON = null
	W0108 21:27:45.644569  268133 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
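The warning above means crictl reported six kube-system containers while runc's state directory listed none as paused, so minikube skips the unpause step. The two sides of that comparison can be reproduced by hand with the same commands the log shows:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo runc --root /run/containerd/runc/k8s.io list -f json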
	I0108 21:27:45.644606  268133 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:27:45.651264  268133 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:27:45.651283  268133 kubeadm.go:627] restartCluster start
	I0108 21:27:45.651318  268133 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:27:45.657485  268133 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:45.658187  268133 kubeconfig.go:135] verify returned: extract IP: "newest-cni-212639" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:27:45.658612  268133 kubeconfig.go:146] "newest-cni-212639" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:27:45.659221  268133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:27:45.660667  268133 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:27:45.666838  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:45.666908  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:45.674578  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:45.874963  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:45.875029  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:45.883431  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:46.074677  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:46.074754  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:46.083708  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:46.274982  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:46.275079  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:46.283693  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:46.474959  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:46.475029  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:46.483304  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:46.675620  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:46.675725  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:46.684152  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:46.875455  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:46.875561  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:46.884037  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:47.075372  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:47.075460  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:47.084339  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:47.275632  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:47.275732  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:47.284201  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:47.475551  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:47.475642  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:47.484190  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:47.675533  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:47.675634  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:47.684085  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:47.875368  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:47.875467  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:47.884031  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:48.075330  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:48.075405  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:48.084246  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:48.275538  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:48.275627  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:48.283816  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:48.475063  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:48.475141  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:48.483695  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:48.675026  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:48.675107  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:48.683990  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:48.684014  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:48.684049  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:48.692348  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:48.692387  268133 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
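The block of repeated "Checking apiserver status" entries above is minikube polling for a kube-apiserver process on the node before concluding that the cluster needs a reconfigure. A minimal sketch of that polling pattern in Go, assuming only that sudo and pgrep are available on the target host (the helper name, timeout, and interval here are illustrative, not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerPID polls pgrep until a kube-apiserver process appears
    // or the deadline passes, mirroring the retry loop visible in the log above.
    func waitForAPIServerPID(timeout, interval time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return string(out), nil // pgrep exits 0 only when a match is found
            }
            time.Sleep(interval)
        }
        return "", fmt.Errorf("timed out waiting for kube-apiserver process")
    }

    func main() {
        pid, err := waitForAPIServerPID(3*time.Second, 200*time.Millisecond)
        if err != nil {
            fmt.Println("needs reconfigure:", err)
            return
        }
        fmt.Println("apiserver pid:", pid)
    }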
	I0108 21:27:48.692395  268133 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:27:48.692407  268133 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:27:48.692452  268133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:27:48.716223  268133 cri.go:87] found id: "78b701fbfc96f080824c257d20a49fe230b1eb5ca9370f534ff782908c77cf37"
	I0108 21:27:48.716249  268133 cri.go:87] found id: "1d25e6a5a7dea83f5e63bcafd8eb05c2d27fee85fbd87c58aca021bc8f201430"
	I0108 21:27:48.716255  268133 cri.go:87] found id: "95140a27d84ddbc15ae0a11f58614b689c8848aee9bc7fa466f8fca6c5bf7d8e"
	I0108 21:27:48.716261  268133 cri.go:87] found id: "b9a2bb0dcb7ae675901085e39942b017b75f9ae2eda52d93f49c5d03274061d2"
	I0108 21:27:48.716267  268133 cri.go:87] found id: "da92be5f72043d327d813ac6b780fc65927384a0b7de1840da48375726eed05b"
	I0108 21:27:48.716280  268133 cri.go:87] found id: "9311bf1fab69e6ab9a390d402eb197e8863ed8e5a3d87c6f80aa4b5fbde84c47"
	I0108 21:27:48.716290  268133 cri.go:87] found id: ""
	I0108 21:27:48.716300  268133 cri.go:232] Stopping containers: [78b701fbfc96f080824c257d20a49fe230b1eb5ca9370f534ff782908c77cf37 1d25e6a5a7dea83f5e63bcafd8eb05c2d27fee85fbd87c58aca021bc8f201430 95140a27d84ddbc15ae0a11f58614b689c8848aee9bc7fa466f8fca6c5bf7d8e b9a2bb0dcb7ae675901085e39942b017b75f9ae2eda52d93f49c5d03274061d2 da92be5f72043d327d813ac6b780fc65927384a0b7de1840da48375726eed05b 9311bf1fab69e6ab9a390d402eb197e8863ed8e5a3d87c6f80aa4b5fbde84c47]
	I0108 21:27:48.716363  268133 ssh_runner.go:195] Run: which crictl
	I0108 21:27:48.719338  268133 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 78b701fbfc96f080824c257d20a49fe230b1eb5ca9370f534ff782908c77cf37 1d25e6a5a7dea83f5e63bcafd8eb05c2d27fee85fbd87c58aca021bc8f201430 95140a27d84ddbc15ae0a11f58614b689c8848aee9bc7fa466f8fca6c5bf7d8e b9a2bb0dcb7ae675901085e39942b017b75f9ae2eda52d93f49c5d03274061d2 da92be5f72043d327d813ac6b780fc65927384a0b7de1840da48375726eed05b 9311bf1fab69e6ab9a390d402eb197e8863ed8e5a3d87c6f80aa4b5fbde84c47
	I0108 21:27:48.745901  268133 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:27:48.755804  268133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:27:48.762868  268133 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan  8 21:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  8 21:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jan  8 21:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  8 21:27 /etc/kubernetes/scheduler.conf
	
	I0108 21:27:48.762934  268133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 21:27:48.770114  268133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 21:27:48.776890  268133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 21:27:48.783617  268133 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:48.783684  268133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 21:27:48.790230  268133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 21:27:48.797262  268133 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:48.797322  268133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 21:27:48.803754  268133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:27:48.810579  268133 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:27:48.810604  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:27:48.854700  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:27:49.955777  268133 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.100955115s)
	I0108 21:27:49.955819  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:27:50.098599  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:27:50.149752  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
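The five Run lines above show the restart path replaying individual "kubeadm init phase" steps (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full "kubeadm init". A rough sketch of that sequence under the paths seen in this log (binary and config locations are taken from the log lines and may differ elsewhere; this is not minikube's actual implementation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runInitPhases replays the individual kubeadm init phases listed in the
    // log above, stopping at the first failure.
    func runInitPhases() error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        kubeadm := "/var/lib/minikube/binaries/v1.25.3/kubeadm"
        for _, p := range phases {
            args := append([]string{kubeadm, "init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            out, err := exec.Command("sudo", args...).CombinedOutput()
            if err != nil {
                return fmt.Errorf("phase %v: %v\n%s", p, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := runInitPhases(); err != nil {
            fmt.Println(err)
        }
    }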
	I0108 21:27:50.243811  268133 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:27:50.243870  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:27:50.754110  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:27:51.254107  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:27:51.320481  268133 api_server.go:71] duration metric: took 1.076675567s to wait for apiserver process to appear ...
	I0108 21:27:51.320509  268133 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:27:51.320518  268133 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:27:51.320879  268133 api_server.go:268] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0108 21:27:51.821576  268133 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:27:54.685150  268133 api_server.go:278] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:27:54.685191  268133 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:27:54.821404  268133 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:27:54.828317  268133 api_server.go:278] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:27:54.828346  268133 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:27:55.320971  268133 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:27:55.325370  268133 api_server.go:278] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:27:55.325392  268133 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:27:55.820972  268133 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:27:55.825305  268133 api_server.go:278] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0108 21:27:55.830801  268133 api_server.go:140] control plane version: v1.25.3
	I0108 21:27:55.830824  268133 api_server.go:130] duration metric: took 4.510310193s to wait for apiserver health ...
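The healthz progression above (connection refused, then 403 for "system:anonymous", then 500 with failing poststarthooks, then 200 "ok") is the normal bootstrap sequence: the probe is unauthenticated, so early responses only indicate that the apiserver is not fully up yet, and the loop keeps retrying. A minimal sketch of such a probe, assuming certificate verification is skipped for the unauthenticated check (minikube's actual HTTP client setup may differ):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it answers 200 "ok".
    // 403 and 500 responses are treated as "not ready yet", matching the log above.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body)
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitHealthz("https://192.168.94.2:8443/healthz", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }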
	I0108 21:27:55.830833  268133 cni.go:95] Creating CNI manager for ""
	I0108 21:27:55.830842  268133 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:27:55.833222  268133 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:27:55.835019  268133 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:27:55.839173  268133 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:27:55.839213  268133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:27:55.852687  268133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:27:56.681804  268133 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:27:56.691379  268133 system_pods.go:59] 9 kube-system pods found
	I0108 21:27:56.691412  268133 system_pods.go:61] "coredns-565d847f94-jlgss" [383bbf49-200e-4180-9174-07b6e59ff237] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:27:56.691438  268133 system_pods.go:61] "etcd-newest-cni-212639" [0b37c532-d886-462c-955b-a24131a077f1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:27:56.691452  268133 system_pods.go:61] "kindnet-b2t2w" [01142b5d-96e1-480a-98f6-4e7f5f90cc73] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:27:56.691461  268133 system_pods.go:61] "kube-apiserver-newest-cni-212639" [ca27e73e-9789-4186-9761-5d5c9c077bf0] Running
	I0108 21:27:56.691490  268133 system_pods.go:61] "kube-controller-manager-newest-cni-212639" [1e0c0317-02ac-4013-b05a-2ece64459b36] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:27:56.691499  268133 system_pods.go:61] "kube-proxy-9dkpd" [6c10127e-fb6d-479c-9f1a-002abe670b1d] Running
	I0108 21:27:56.691508  268133 system_pods.go:61] "kube-scheduler-newest-cni-212639" [1b487550-9f18-460e-816c-ddcbf1d8ff5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:27:56.691519  268133 system_pods.go:61] "metrics-server-5c8fd5cf8-zn4gr" [9713d296-cf3d-40f3-b710-08eaf7d22988] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:27:56.691533  268133 system_pods.go:61] "storage-provisioner" [6f8b2d5a-0a17-461f-891e-dd4c4ccc7006] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:27:56.691543  268133 system_pods.go:74] duration metric: took 9.717294ms to wait for pod list to return data ...
	I0108 21:27:56.691556  268133 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:27:56.696614  268133 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:27:56.696637  268133 node_conditions.go:123] node cpu capacity is 8
	I0108 21:27:56.696647  268133 node_conditions.go:105] duration metric: took 5.086053ms to run NodePressure ...
	I0108 21:27:56.696663  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:27:56.828302  268133 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:27:56.835024  268133 ops.go:34] apiserver oom_adj: -16
	I0108 21:27:56.835047  268133 kubeadm.go:631] restartCluster took 11.183758492s
	I0108 21:27:56.835057  268133 kubeadm.go:398] StartCluster complete in 11.227163302s
	I0108 21:27:56.835074  268133 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:27:56.835156  268133 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:27:56.836399  268133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:27:56.840086  268133 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-212639" rescaled to 1
	I0108 21:27:56.840139  268133 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:27:56.842342  268133 out.go:177] * Verifying Kubernetes components...
	I0108 21:27:56.840188  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:27:56.840203  268133 addons.go:486] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0108 21:27:56.840364  268133 config.go:180] Loaded profile config "newest-cni-212639": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:27:56.844062  268133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:27:56.844103  268133 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-212639"
	I0108 21:27:56.844111  268133 addons.go:65] Setting default-storageclass=true in profile "newest-cni-212639"
	I0108 21:27:56.844126  268133 addons.go:65] Setting metrics-server=true in profile "newest-cni-212639"
	I0108 21:27:56.844127  268133 addons.go:227] Setting addon storage-provisioner=true in "newest-cni-212639"
	I0108 21:27:56.844131  268133 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-212639"
	W0108 21:27:56.844139  268133 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:27:56.844142  268133 addons.go:227] Setting addon metrics-server=true in "newest-cni-212639"
	I0108 21:27:56.844152  268133 addons.go:65] Setting dashboard=true in profile "newest-cni-212639"
	W0108 21:27:56.844159  268133 addons.go:236] addon metrics-server should already be in state true
	I0108 21:27:56.844169  268133 addons.go:227] Setting addon dashboard=true in "newest-cni-212639"
	W0108 21:27:56.844176  268133 addons.go:236] addon dashboard should already be in state true
	I0108 21:27:56.844186  268133 host.go:66] Checking if "newest-cni-212639" exists ...
	I0108 21:27:56.844190  268133 host.go:66] Checking if "newest-cni-212639" exists ...
	I0108 21:27:56.844217  268133 host.go:66] Checking if "newest-cni-212639" exists ...
	I0108 21:27:56.844457  268133 cli_runner.go:164] Run: docker container inspect newest-cni-212639 --format={{.State.Status}}
	I0108 21:27:56.844614  268133 cli_runner.go:164] Run: docker container inspect newest-cni-212639 --format={{.State.Status}}
	I0108 21:27:56.844642  268133 cli_runner.go:164] Run: docker container inspect newest-cni-212639 --format={{.State.Status}}
	I0108 21:27:56.844615  268133 cli_runner.go:164] Run: docker container inspect newest-cni-212639 --format={{.State.Status}}
	I0108 21:27:56.856799  268133 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:27:56.856857  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:27:56.893927  268133 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:27:56.895774  268133 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:27:56.897447  268133 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:27:56.899092  268133 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:27:56.900745  268133 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:27:56.900759  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:27:56.900805  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:56.899109  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:27:56.902469  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:56.904190  268133 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:27:56.906038  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:27:56.906056  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:27:56.906095  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:56.913846  268133 addons.go:227] Setting addon default-storageclass=true in "newest-cni-212639"
	W0108 21:27:56.913869  268133 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:27:56.913898  268133 host.go:66] Checking if "newest-cni-212639" exists ...
	I0108 21:27:56.914336  268133 cli_runner.go:164] Run: docker container inspect newest-cni-212639 --format={{.State.Status}}
	I0108 21:27:56.940644  268133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33042 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/newest-cni-212639/id_rsa Username:docker}
	I0108 21:27:56.944505  268133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33042 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/newest-cni-212639/id_rsa Username:docker}
	I0108 21:27:56.950971  268133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33042 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/newest-cni-212639/id_rsa Username:docker}
	I0108 21:27:56.956692  268133 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:27:56.956715  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:27:56.956765  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:56.970097  268133 api_server.go:71] duration metric: took 129.918632ms to wait for apiserver process to appear ...
	I0108 21:27:56.970122  268133 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:27:56.970131  268133 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:27:56.971086  268133 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 21:27:56.976505  268133 api_server.go:278] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0108 21:27:56.977444  268133 api_server.go:140] control plane version: v1.25.3
	I0108 21:27:56.977502  268133 api_server.go:130] duration metric: took 7.373609ms to wait for apiserver health ...
	I0108 21:27:56.977524  268133 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:27:56.984004  268133 system_pods.go:59] 9 kube-system pods found
	I0108 21:27:56.984038  268133 system_pods.go:61] "coredns-565d847f94-jlgss" [383bbf49-200e-4180-9174-07b6e59ff237] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:27:56.984051  268133 system_pods.go:61] "etcd-newest-cni-212639" [0b37c532-d886-462c-955b-a24131a077f1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:27:56.984062  268133 system_pods.go:61] "kindnet-b2t2w" [01142b5d-96e1-480a-98f6-4e7f5f90cc73] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:27:56.984071  268133 system_pods.go:61] "kube-apiserver-newest-cni-212639" [ca27e73e-9789-4186-9761-5d5c9c077bf0] Running
	I0108 21:27:56.984081  268133 system_pods.go:61] "kube-controller-manager-newest-cni-212639" [1e0c0317-02ac-4013-b05a-2ece64459b36] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:27:56.984092  268133 system_pods.go:61] "kube-proxy-9dkpd" [6c10127e-fb6d-479c-9f1a-002abe670b1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:27:56.984101  268133 system_pods.go:61] "kube-scheduler-newest-cni-212639" [1b487550-9f18-460e-816c-ddcbf1d8ff5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:27:56.984110  268133 system_pods.go:61] "metrics-server-5c8fd5cf8-zn4gr" [9713d296-cf3d-40f3-b710-08eaf7d22988] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:27:56.984118  268133 system_pods.go:61] "storage-provisioner" [6f8b2d5a-0a17-461f-891e-dd4c4ccc7006] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:27:56.984125  268133 system_pods.go:74] duration metric: took 6.587949ms to wait for pod list to return data ...
	I0108 21:27:56.984135  268133 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:27:56.986465  268133 default_sa.go:45] found service account: "default"
	I0108 21:27:56.986481  268133 default_sa.go:55] duration metric: took 2.340833ms for default service account to be created ...
	I0108 21:27:56.986492  268133 kubeadm.go:573] duration metric: took 146.318184ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0108 21:27:56.986511  268133 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:27:56.987573  268133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33042 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/newest-cni-212639/id_rsa Username:docker}
	I0108 21:27:56.988961  268133 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:27:56.988981  268133 node_conditions.go:123] node cpu capacity is 8
	I0108 21:27:56.988989  268133 node_conditions.go:105] duration metric: took 2.473937ms to run NodePressure ...
	I0108 21:27:56.988999  268133 start.go:217] waiting for startup goroutines ...
	I0108 21:27:57.042655  268133 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:27:57.042675  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:27:57.049738  268133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:27:57.057234  268133 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:27:57.057257  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:27:57.066234  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:27:57.066261  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:27:57.071993  268133 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:27:57.072018  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:27:57.111308  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:27:57.111338  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:27:57.120602  268133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:27:57.126801  268133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:27:57.127797  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:27:57.127818  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:27:57.144715  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:27:57.144799  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:27:57.223994  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:27:57.224018  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:27:57.243113  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:27:57.243141  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:27:57.325619  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:27:57.325649  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:27:57.343985  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:27:57.344014  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:27:57.429266  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:27:57.429292  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:27:57.445559  268133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:27:57.742586  268133 addons.go:457] Verifying addon metrics-server=true in "newest-cni-212639"
	I0108 21:27:58.012564  268133 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-212639 addons enable metrics-server	
	
	
	I0108 21:27:58.014398  268133 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0108 21:27:58.016196  268133 addons.go:488] enableAddons completed in 1.17599155s
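As the addon lines above show, each addon is a set of YAML manifests copied onto the node under /etc/kubernetes/addons and then applied with the node-local kubectl binary and the node-local kubeconfig. A hedged sketch of that apply step (paths copied from the log; the kubectl version directory depends on the cluster version, and the helper name is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon applies one addon manifest the same way the Run lines above do:
    // sudo, the in-VM kubeconfig, and the cluster's own kubectl binary.
    func applyAddon(manifest string) error {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.25.3/kubectl",
            "apply", "-f", manifest)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
        }
        fmt.Printf("%s", out)
        return nil
    }

    func main() {
        if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
            fmt.Println(err)
        }
    }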
	I0108 21:27:58.016563  268133 ssh_runner.go:195] Run: rm -f paused
	I0108 21:27:58.072101  268133 start.go:536] kubectl: 1.26.0, cluster: 1.25.3 (minor skew: 1)
	I0108 21:27:58.074305  268133 out.go:177] * Done! kubectl is now configured to use "newest-cni-212639" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	ed44b6cd92e88       d6e3e26021b60       3 minutes ago       Exited              kindnet-cni               3                   156951a7e6ad9
	4fdaee2b10f29       c21b0c7400f98       12 minutes ago      Running             kube-proxy                0                   700fdf969a65f
	a9e20d8377a66       b2756210eeabf       12 minutes ago      Running             etcd                      0                   3177f12cbcc92
	3baeebbc6da60       b305571ca60a5       12 minutes ago      Running             kube-apiserver            0                   0c6dba6ffda90
	dc587e05c9875       06a629a7e51cd       12 minutes ago      Running             kube-controller-manager   0                   6963fcc252763
	18030e6256a0f       301ddc62b80b1       12 minutes ago      Running             kube-scheduler            0                   40f53ffcd3927
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sun 2023-01-08 21:18:35 UTC, end at Sun 2023-01-08 21:31:09 UTC. --
	Jan 08 21:24:31 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:24:31.353283530Z" level=warning msg="cleaning up after shim disconnected" id=a3a1060e1346768567a9ea4fb7fb7b0012cc8417fe5011dd546dd1255ed49b4d namespace=k8s.io
	Jan 08 21:24:31 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:24:31.353297928Z" level=info msg="cleaning up dead shim"
	Jan 08 21:24:31 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:24:31.362007670Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:24:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3162 runtime=io.containerd.runc.v2\n"
	Jan 08 21:24:31 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:24:31.796218196Z" level=info msg="RemoveContainer for \"574f15edf833175e912660c2f5c10a57435ef520281471547e15dedce5a8781a\""
	Jan 08 21:24:31 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:24:31.803024051Z" level=info msg="RemoveContainer for \"574f15edf833175e912660c2f5c10a57435ef520281471547e15dedce5a8781a\" returns successfully"
	Jan 08 21:24:44 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:24:44.233533359Z" level=info msg="CreateContainer within sandbox \"156951a7e6ad93a05e095bff14d2097ddbf5a7bcfa8469c08b265cf49b68920b\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jan 08 21:24:44 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:24:44.246607491Z" level=info msg="CreateContainer within sandbox \"156951a7e6ad93a05e095bff14d2097ddbf5a7bcfa8469c08b265cf49b68920b\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"e7d77d623de4f297853eafcb517d6e564f9dbe8ed547bd9dd3d95905094907db\""
	Jan 08 21:24:44 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:24:44.247163693Z" level=info msg="StartContainer for \"e7d77d623de4f297853eafcb517d6e564f9dbe8ed547bd9dd3d95905094907db\""
	Jan 08 21:24:44 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:24:44.326500700Z" level=info msg="StartContainer for \"e7d77d623de4f297853eafcb517d6e564f9dbe8ed547bd9dd3d95905094907db\" returns successfully"
	Jan 08 21:27:24 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:24.864600123Z" level=info msg="shim disconnected" id=e7d77d623de4f297853eafcb517d6e564f9dbe8ed547bd9dd3d95905094907db
	Jan 08 21:27:24 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:24.864655647Z" level=warning msg="cleaning up after shim disconnected" id=e7d77d623de4f297853eafcb517d6e564f9dbe8ed547bd9dd3d95905094907db namespace=k8s.io
	Jan 08 21:27:24 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:24.864665974Z" level=info msg="cleaning up dead shim"
	Jan 08 21:27:24 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:24.873334658Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:27:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3639 runtime=io.containerd.runc.v2\n"
	Jan 08 21:27:25 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:25.050837242Z" level=info msg="RemoveContainer for \"a3a1060e1346768567a9ea4fb7fb7b0012cc8417fe5011dd546dd1255ed49b4d\""
	Jan 08 21:27:25 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:25.060940860Z" level=info msg="RemoveContainer for \"a3a1060e1346768567a9ea4fb7fb7b0012cc8417fe5011dd546dd1255ed49b4d\" returns successfully"
	Jan 08 21:27:53 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:53.239117101Z" level=info msg="CreateContainer within sandbox \"156951a7e6ad93a05e095bff14d2097ddbf5a7bcfa8469c08b265cf49b68920b\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jan 08 21:27:53 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:53.252621250Z" level=info msg="CreateContainer within sandbox \"156951a7e6ad93a05e095bff14d2097ddbf5a7bcfa8469c08b265cf49b68920b\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a\""
	Jan 08 21:27:53 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:53.253259212Z" level=info msg="StartContainer for \"ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a\""
	Jan 08 21:27:53 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:53.329266240Z" level=info msg="StartContainer for \"ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a\" returns successfully"
	Jan 08 21:30:33 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:30:33.953098822Z" level=info msg="shim disconnected" id=ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a
	Jan 08 21:30:33 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:30:33.953169572Z" level=warning msg="cleaning up after shim disconnected" id=ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a namespace=k8s.io
	Jan 08 21:30:33 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:30:33.953188398Z" level=info msg="cleaning up dead shim"
	Jan 08 21:30:33 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:30:33.962418312Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:30:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4128 runtime=io.containerd.runc.v2\n"
	Jan 08 21:30:34 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:30:34.319837538Z" level=info msg="RemoveContainer for \"e7d77d623de4f297853eafcb517d6e564f9dbe8ed547bd9dd3d95905094907db\""
	Jan 08 21:30:34 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:30:34.324704126Z" level=info msg="RemoveContainer for \"e7d77d623de4f297853eafcb517d6e564f9dbe8ed547bd9dd3d95905094907db\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-211828
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-211828
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=old-k8s-version-211828
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T21_18_51_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 21:18:45 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 21:30:16 +0000   Sun, 08 Jan 2023 21:18:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 21:30:16 +0000   Sun, 08 Jan 2023 21:18:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 21:30:16 +0000   Sun, 08 Jan 2023 21:18:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 08 Jan 2023 21:30:16 +0000   Sun, 08 Jan 2023 21:18:42 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-211828
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304681132Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32871748Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304681132Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32871748Ki
	 pods:               110
	System Info:
	 Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	 System UUID:                a9413ae7-d165-4b76-a22b-73b89e3e2d6a
	 Boot ID:                    abb1671c-ddf5-4694-bdc8-1024e5cc0b18
	 Kernel Version:             5.15.0-1025-gcp
	 OS Image:                   Ubuntu 20.04.5 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.6.10
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-211828                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kindnet-9z2n8                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                kube-apiserver-old-k8s-version-211828             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-controller-manager-old-k8s-version-211828    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-proxy-jqh6r                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-211828             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet, old-k8s-version-211828     Node old-k8s-version-211828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet, old-k8s-version-211828     Node old-k8s-version-211828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet, old-k8s-version-211828     Node old-k8s-version-211828 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-211828  Starting kube-proxy.
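The Ready condition above reports "cni plugin not initialized", which is why old-k8s-version-211828 stays NotReady and keeps the NoSchedule taint; this is consistent with the kindnet-cni container repeatedly exiting in the containerd log earlier in this section. A small sketch for checking that condition from kubectl's JSON output, assuming kubectl is on PATH and pointed at the right context (node name taken from the log):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // nodeReady shells out to kubectl and inspects the node's Ready condition,
    // the same condition shown as "KubeletNotReady ... cni plugin not initialized"
    // in the describe output above.
    func nodeReady(node string) (bool, string, error) {
        out, err := exec.Command("kubectl", "get", "node", node, "-o", "json").Output()
        if err != nil {
            return false, "", err
        }
        var obj struct {
            Status struct {
                Conditions []struct {
                    Type    string `json:"type"`
                    Status  string `json:"status"`
                    Message string `json:"message"`
                } `json:"conditions"`
            } `json:"status"`
        }
        if err := json.Unmarshal(out, &obj); err != nil {
            return false, "", err
        }
        for _, c := range obj.Status.Conditions {
            if c.Type == "Ready" {
                return c.Status == "True", c.Message, nil
            }
        }
        return false, "", fmt.Errorf("no Ready condition found")
    }

    func main() {
        ready, msg, err := nodeReady("old-k8s-version-211828")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("ready=%v message=%q\n", ready, msg)
    }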
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +2.971851] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027844] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027909] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[Jan 8 21:19] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.006215] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023951] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.967852] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.035798] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023925] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.940341] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.027361] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.019905] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	
	* 
	* ==> etcd [a9e20d8377a666867d2e18a3ea12818eaa42542c47d99b0ba20c5f7b3c9a8f70] <==
	* 2023-01-08 21:18:42.222758 I | raft: ea7e25599daad906 became follower at term 1
	2023-01-08 21:18:42.230174 W | auth: simple token is not cryptographically signed
	2023-01-08 21:18:42.233023 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-01-08 21:18:42.234706 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-01-08 21:18:42.234820 I | embed: listening for metrics on http://192.168.76.2:2381
	2023-01-08 21:18:42.235027 I | etcdserver: ea7e25599daad906 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-01-08 21:18:42.235302 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-01-08 21:18:42.236050 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2023-01-08 21:18:42.823124 I | raft: ea7e25599daad906 is starting a new election at term 1
	2023-01-08 21:18:42.823163 I | raft: ea7e25599daad906 became candidate at term 2
	2023-01-08 21:18:42.823187 I | raft: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	2023-01-08 21:18:42.823199 I | raft: ea7e25599daad906 became leader at term 2
	2023-01-08 21:18:42.823207 I | raft: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2023-01-08 21:18:42.823565 I | etcdserver: setting up the initial cluster version to 3.3
	2023-01-08 21:18:42.823593 I | etcdserver: published {Name:old-k8s-version-211828 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2023-01-08 21:18:42.823608 I | embed: ready to serve client requests
	2023-01-08 21:18:42.823618 I | embed: ready to serve client requests
	2023-01-08 21:18:42.824159 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-01-08 21:18:42.824619 I | etcdserver/api: enabled capabilities for version 3.3
	2023-01-08 21:18:42.825884 I | embed: serving client requests on 192.168.76.2:2379
	2023-01-08 21:18:42.826236 I | embed: serving client requests on 127.0.0.1:2379
	2023-01-08 21:19:59.361725 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:215" took too long (117.600264ms) to execute
	2023-01-08 21:20:00.632412 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (349.431998ms) to execute
	2023-01-08 21:28:42.839663 I | mvcc: store.index: compact 457
	2023-01-08 21:28:42.840457 I | mvcc: finished scheduled compaction at 457 (took 474.586µs)
	
	* 
	* ==> kernel <==
	*  21:31:10 up  1:13,  0 users,  load average: 0.49, 0.66, 1.21
	Linux old-k8s-version-211828 5.15.0-1025-gcp #32~20.04.2-Ubuntu SMP Tue Nov 29 08:31:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [3baeebbc6da6011661ac440d440193720d7cb3ffc1d6f51175b239cc7994d8d4] <==
	* I0108 21:18:45.832401       1 naming_controller.go:288] Starting NamingConditionController
	I0108 21:18:45.832488       1 establishing_controller.go:73] Starting EstablishingController
	I0108 21:18:45.832179       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E0108 21:18:45.833021       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.76.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0108 21:18:45.931842       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:18:45.932088       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:18:45.932770       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0108 21:18:45.932802       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:18:46.831581       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0108 21:18:46.831614       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0108 21:18:46.831759       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 21:18:46.835235       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I0108 21:18:46.838488       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I0108 21:18:46.838509       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0108 21:18:47.618962       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:18:48.612611       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:18:48.892810       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0108 21:18:49.229017       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0108 21:18:49.229650       1 controller.go:606] quota admission added evaluator for: endpoints
	I0108 21:18:50.129578       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0108 21:18:50.537710       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0108 21:18:50.897501       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0108 21:19:05.581835       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0108 21:19:05.598947       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0108 21:19:05.761910       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [dc587e05c9875fe35b86a28d7d5b8fc7bedc7907ec9abcf12c1883d15804ed4d] <==
	* I0108 21:19:05.532786       1 shared_informer.go:204] Caches are synced for HPA 
	I0108 21:19:05.577838       1 shared_informer.go:204] Caches are synced for daemon sets 
	I0108 21:19:05.583510       1 shared_informer.go:204] Caches are synced for stateful set 
	I0108 21:19:05.589186       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"a0f32b10-75af-4660-85eb-9e2d60222d15", APIVersion:"apps/v1", ResourceVersion:"226", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-9z2n8
	I0108 21:19:05.591195       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"5562d924-d3c2-495e-8160-7930ac4bed98", APIVersion:"apps/v1", ResourceVersion:"214", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-jqh6r
	E0108 21:19:05.603944       1 daemon_controller.go:302] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"a0f32b10-75af-4660-85eb-9e2d60222d15", ResourceVersion:"226", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63808809531, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20221004-44d545d1\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerati
ons\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.mk\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001002e80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:
[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001002ea0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.Vsphere
VirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001002ec0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolume
Source)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001002ee0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)
(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.
Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20221004-44d545d1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001002f00)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001002f40)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resou
rce.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0011562d0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.Eph
emeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0007111e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0011652c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.Resou
rceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00013c870)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000711260)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	E0108 21:19:05.611711       1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"5562d924-d3c2-495e-8160-7930ac4bed98", ResourceVersion:"214", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63808809530, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001002da0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Names
pace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeS
ource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001a2a980), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001002dc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001002de0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.A
zureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001002e20)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMo
de)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001156140), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000710ed8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServic
eAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001165260), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy
{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00013c868)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000710f18)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0108 21:19:05.726906       1 shared_informer.go:204] Caches are synced for disruption 
	I0108 21:19:05.726931       1 disruption.go:341] Sending events to api server.
	I0108 21:19:05.734015       1 shared_informer.go:204] Caches are synced for resource quota 
	I0108 21:19:05.759951       1 shared_informer.go:204] Caches are synced for deployment 
	I0108 21:19:05.764185       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"2341f665-8f22-48e3-9b76-dbd488b1235d", APIVersion:"apps/v1", ResourceVersion:"320", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 1
	I0108 21:19:05.775680       1 shared_informer.go:204] Caches are synced for resource quota 
	I0108 21:19:05.783488       1 shared_informer.go:204] Caches are synced for ReplicaSet 
	I0108 21:19:05.787135       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"993e0e3b-0673-4494-853e-0ee4024d61de", APIVersion:"apps/v1", ResourceVersion:"336", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-lm49s
	I0108 21:19:05.788779       1 shared_informer.go:204] Caches are synced for expand 
	I0108 21:19:05.788903       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0108 21:19:05.788923       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0108 21:19:05.803010       1 shared_informer.go:204] Caches are synced for certificate 
	I0108 21:19:05.807809       1 shared_informer.go:204] Caches are synced for persistent volume 
	I0108 21:19:05.833561       1 shared_informer.go:204] Caches are synced for certificate 
	I0108 21:19:05.834019       1 shared_informer.go:204] Caches are synced for attach detach 
	I0108 21:19:05.835463       1 shared_informer.go:204] Caches are synced for PV protection 
	I0108 21:19:05.839437       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0108 21:19:05.850779       1 log.go:172] [INFO] signed certificate with serial number 477651019640136324065142830251145268032180874070
	
	* 
	* ==> kube-proxy [4fdaee2b10f2982887b9869fcd0c1dc5ebe85380068a2c7ca6e308bbd418bf25] <==
	* W0108 21:19:06.244257       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0108 21:19:06.253650       1 node.go:135] Successfully retrieved node IP: 192.168.76.2
	I0108 21:19:06.253708       1 server_others.go:149] Using iptables Proxier.
	I0108 21:19:06.254406       1 server.go:529] Version: v1.16.0
	I0108 21:19:06.255737       1 config.go:131] Starting endpoints config controller
	I0108 21:19:06.255772       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0108 21:19:06.255807       1 config.go:313] Starting service config controller
	I0108 21:19:06.255831       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0108 21:19:06.409897       1 shared_informer.go:204] Caches are synced for service config 
	I0108 21:19:06.409933       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [18030e6256a0f097fd3fd026a18690f5b7e901b5dacd851696eb59d51effb330] <==
	* I0108 21:18:45.921798       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0108 21:18:46.015772       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:18:46.016363       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:18:46.017243       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:18:46.017333       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:18:46.017720       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:18:46.017731       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:18:46.017872       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:18:46.018105       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:18:46.019057       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:18:46.019300       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:18:46.020625       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:18:47.017213       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:18:47.018225       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:18:47.020195       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:18:47.020884       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:18:47.021712       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:18:47.022941       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:18:47.023944       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:18:47.024799       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:18:47.028700       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:18:47.029806       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:18:47.031840       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:19:06.775194       1 factory.go:585] pod is already present in the activeQ
	E0108 21:23:08.286026       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:18:35 UTC, end at Sun 2023-01-08 21:31:10 UTC. --
	Jan 08 21:29:21 old-k8s-version-211828 kubelet[926]: E0108 21:29:21.505305     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:29:26 old-k8s-version-211828 kubelet[926]: E0108 21:29:26.505975     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:29:31 old-k8s-version-211828 kubelet[926]: E0108 21:29:31.506682     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:29:36 old-k8s-version-211828 kubelet[926]: E0108 21:29:36.507411     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:29:41 old-k8s-version-211828 kubelet[926]: E0108 21:29:41.508086     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:29:46 old-k8s-version-211828 kubelet[926]: E0108 21:29:46.508835     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:29:51 old-k8s-version-211828 kubelet[926]: E0108 21:29:51.509584     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:29:56 old-k8s-version-211828 kubelet[926]: E0108 21:29:56.510253     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:01 old-k8s-version-211828 kubelet[926]: E0108 21:30:01.511089     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:06 old-k8s-version-211828 kubelet[926]: E0108 21:30:06.511826     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:11 old-k8s-version-211828 kubelet[926]: E0108 21:30:11.512567     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:16 old-k8s-version-211828 kubelet[926]: E0108 21:30:16.513377     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:21 old-k8s-version-211828 kubelet[926]: E0108 21:30:21.514071     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:26 old-k8s-version-211828 kubelet[926]: E0108 21:30:26.514822     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:31 old-k8s-version-211828 kubelet[926]: E0108 21:30:31.515598     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:34 old-k8s-version-211828 kubelet[926]: E0108 21:30:34.319861     926 pod_workers.go:191] Error syncing pod ec80e506-5c07-426a-96b5-39a19c3616de ("kindnet-9z2n8_kube-system(ec80e506-5c07-426a-96b5-39a19c3616de)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-9z2n8_kube-system(ec80e506-5c07-426a-96b5-39a19c3616de)"
	Jan 08 21:30:36 old-k8s-version-211828 kubelet[926]: E0108 21:30:36.516306     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:41 old-k8s-version-211828 kubelet[926]: E0108 21:30:41.517137     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:46 old-k8s-version-211828 kubelet[926]: E0108 21:30:46.518015     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:48 old-k8s-version-211828 kubelet[926]: E0108 21:30:48.231223     926 pod_workers.go:191] Error syncing pod ec80e506-5c07-426a-96b5-39a19c3616de ("kindnet-9z2n8_kube-system(ec80e506-5c07-426a-96b5-39a19c3616de)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-9z2n8_kube-system(ec80e506-5c07-426a-96b5-39a19c3616de)"
	Jan 08 21:30:51 old-k8s-version-211828 kubelet[926]: E0108 21:30:51.518848     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:56 old-k8s-version-211828 kubelet[926]: E0108 21:30:56.519588     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:31:01 old-k8s-version-211828 kubelet[926]: E0108 21:31:01.520401     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:31:02 old-k8s-version-211828 kubelet[926]: E0108 21:31:02.231423     926 pod_workers.go:191] Error syncing pod ec80e506-5c07-426a-96b5-39a19c3616de ("kindnet-9z2n8_kube-system(ec80e506-5c07-426a-96b5-39a19c3616de)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-9z2n8_kube-system(ec80e506-5c07-426a-96b5-39a19c3616de)"
	Jan 08 21:31:06 old-k8s-version-211828 kubelet[926]: E0108 21:31:06.521134     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-211828 -n old-k8s-version-211828
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-211828 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-5644d7b6d9-lm49s storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-211828 describe pod busybox coredns-5644d7b6d9-lm49s storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-211828 describe pod busybox coredns-5644d7b6d9-lm49s storage-provisioner: exit status 1 (69.503383ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jz8cr (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  default-token-jz8cr:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-jz8cr
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  8m2s                  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
	  Warning  FailedScheduling  6m54s (x1 over 8m2s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-lm49s" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-211828 describe pod busybox coredns-5644d7b6d9-lm49s storage-provisioner: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-211828
helpers_test.go:235: (dbg) docker inspect old-k8s-version-211828:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9",
	        "Created": "2023-01-08T21:18:34.933200191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 235016,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:18:35.293925019Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/hostname",
	        "HostsPath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/hosts",
	        "LogPath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9-json.log",
	        "Name": "/old-k8s-version-211828",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-211828:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-211828",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c-init/diff:/var/lib/docker/overlay2/08b33854295261bb38a0074055b3dd7f2f489d35ea70ee5a594182ad1dc0fd2b/diff:/var/lib/docker/overlay2/5abe192e44c49d2a25560e4c1ae4a6ba5ec58100f8e1869d306a28b614caf414/diff:/var/lib/docker/overlay2/156a32df2dac676d7cd6558d5ad13be5054eb2fddc8d1b10137bf34bd6b1ddf7/diff:/var/lib/docker/overlay2/f3a5b8630133b5cdc48bbdb98f666efad221386dbd08f5fc096c7ba6d3f31667/diff:/var/lib/docker/overlay2/4d4ccd218e479874818175ec6c4e28f06f0084eed74b9a83af5a96e8bf41e6c7/diff:/var/lib/docker/overlay2/ab8a3247b29cb45a0024dbc2477a50b042f03e39064c09e23d21a2b5f331013c/diff:/var/lib/docker/overlay2/cd6161fa5d366910a9eb537894f374c49b0ba256c81a9c1ad3d4f4786d730668/diff:/var/lib/docker/overlay2/a26d88db409214dee405ed385f40b0d58e37ef670c26e65c1132eb27e3bb984d/diff:/var/lib/docker/overlay2/f76d833c20bf46fe7aec412f931039cdeb58b129552b97228c55fada8b05e63a/diff:/var/lib/docker/overlay2/584beb
80220462b8bc5a6c651735d55544ebef775c71b1a91b92235049a7e9da/diff:/var/lib/docker/overlay2/5209ebd508837f05291d5f22eff307db47f9c94691dd096cdc6ff6446db45461/diff:/var/lib/docker/overlay2/968fc302c0737742d4698bb0026b5cfc50c4b2934930db492c97b346fca2bd67/diff:/var/lib/docker/overlay2/b469d09ff82a75fb0f6dd04264a04b21d3d113c8577afc6dbc6a7dafe4022179/diff:/var/lib/docker/overlay2/2869116cf2f02bdc6170bde623a11022eba8e5c0f84630c9a0fd0a86483d0ed2/diff:/var/lib/docker/overlay2/ad4794ae637b0ce306963007b35562110fba050d9e3cd6196830448c749d7cc6/diff:/var/lib/docker/overlay2/a53b191897dfffb12ad2948cad16d668a0f7c2be5e3e6fd2556fe22d8d608879/diff:/var/lib/docker/overlay2/bbc7772821406640e29f442eb8752d92d7b72e8314035d6cf94255976b90642e/diff:/var/lib/docker/overlay2/5e3ac5600af83e3f6f756895fbd7a66fa97e799f659541508353078d057fd89b/diff:/var/lib/docker/overlay2/79c23e7084aa1de637cb09a7c07679d6c8d4ff6c6e7384f7f15cb549f8186fd6/diff:/var/lib/docker/overlay2/86ef1d4759376f1edb9cf43004ae59ba61200d79acea357b872de5247071531a/diff:/var/lib/d
ocker/overlay2/5c948a7514e051c2a3b4600944d475a674404c1f58b1bd7fd4d7b764be35c3e7/diff:/var/lib/docker/overlay2/71b25d81823337d776e0202f623b2f822a2ebba0e8f935e3211da145bbdac22d/diff:/var/lib/docker/overlay2/4261f26f8c837e7f8f07ba0cd2aca7274aa96561ac0bd104af28d44c15678f8d/diff:/var/lib/docker/overlay2/f2a71b21156c3ad367bb9fe8bde3e75df6681f9e505d34d8e1b635e50aa0e29b/diff:/var/lib/docker/overlay2/9ef1fde443b60bf4cda7ef2a51adcacef4383280f30eb6ee14e23959a5395e18/diff:/var/lib/docker/overlay2/aa727d5a432452b3e1ed2fc994db0bf2d5a47af89bc768a1e2b633baa8319025/diff:/var/lib/docker/overlay2/dc9499b738334b5626de1363086d853c4e6c2c5b2d51f58e8e1ed8346cc39f45/diff:/var/lib/docker/overlay2/03cc962675627b454b9857b5ffcac439143653d35cd42ad886571fbecd955c6c/diff:/var/lib/docker/overlay2/32064543ecb8bae609fda6b220233d7b73f4e601bd320e1d389451fb120c2459/diff:/var/lib/docker/overlay2/a0c00493cc3fdbdaa87f61faf83c8fcedb9ee4d5b24ecc35eceba0429b6dcc88/diff:/var/lib/docker/overlay2/18cbe77f8dab8144889e1deb46a8795fd207afbdfd9ea3fdeb526b72d7e
4a4cb/diff:/var/lib/docker/overlay2/dca4f5d0bca28bc75fa764afd39d07871f051c2d362fc2850ca957b975f75555/diff:/var/lib/docker/overlay2/cd071d8f661b9ff9c6d861fce06842188fda3d307062488b93944a97883f4a67/diff:/var/lib/docker/overlay2/5a0c166a1e11d0bb9f7de4a882be960f70ac972e91b3ab8fd83fc6eec2bf1d5e/diff:/var/lib/docker/overlay2/43eb3ae44d3bf3e6cbf5a66accacb1247ef210213c56a67097949d06794f4bbc/diff:/var/lib/docker/overlay2/b60cb5b525f4aed6b27cf2e451596e134d1185254f3798d5a91357814552e273/diff:/var/lib/docker/overlay2/e5c8c85e78fedb1981436f368ad70bdfe38d240dddceac3d103e1d09bc252cef/diff:/var/lib/docker/overlay2/7bf66ab7d17eda3c6760fe43588afa33bb56279f5a7202de9cc98b4711f0da5b/diff:/var/lib/docker/overlay2/d5dbd6e8a20b8681e7b55b57c382c8c9029da8496cc9e0f4d1eb3b224583535e/diff:/var/lib/docker/overlay2/de3ca9b514aa5071c0fd39d0d80fa080bce6e82e872747ec916ef7bf379c0c2a/diff:/var/lib/docker/overlay2/80bff03df7f2a6639d98a2c99d1cc55a36dc9110501452bfd7bce3bce496541e/diff:/var/lib/docker/overlay2/471bea5a4732f210c3fcf87a02643b615a96f6
39d01ac2c3e71f7886b23733a5/diff:/var/lib/docker/overlay2/29dd84326e3ef52c85008a60a7ff2225fd18f1d51ec3e60f143a0d93e442469d/diff:/var/lib/docker/overlay2/05d035eba3e40a396836d9917064869afd7ec70920f526f8903dc8f5a9f2dce3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-211828",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-211828/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-211828",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-211828",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-211828",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd7a2d331da5df8a5ad26b1a11ef8071062a8308e1e900de389b1fcbf053e8d0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33012"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33011"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33008"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33010"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33009"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cd7a2d331da5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-211828": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f66150df9bfb",
	                        "old-k8s-version-211828"
	                    ],
	                    "NetworkID": "e48a739a7de53b0a2a21ddeaf3e573efe5bbf8c41c6a15cbe1e7c39d0f359d82",
	                    "EndpointID": "b0b05a18f751ba3ee859f73690ebd1a61bca7d47388946fae5701f1b0d051310",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-211828 -n old-k8s-version-211828
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-211828 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-211859                                       | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:18 UTC |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr                                          |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| delete  | -p bridge-210619                                           | bridge-210619                | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:19 UTC |
	| delete  | -p calico-210619                                           | calico-210619                | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:19 UTC |
	| start   | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:20 UTC |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                              |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-211952 | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:19 UTC |
	|         | disable-driver-mounts-211952                               |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC |                     |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-211950                | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:20 UTC | 08 Jan 23 21:20 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:20 UTC | 08 Jan 23 21:21 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-211950                     | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:21 UTC | 08 Jan 23 21:21 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:21 UTC | 08 Jan 23 21:26 UTC |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                              |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-211950 sudo                                 | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-212639                 | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-212639                      | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-212639 sudo                                  | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 21:27:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:27:28.765231  268133 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:27:28.765330  268133 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:27:28.765338  268133 out.go:309] Setting ErrFile to fd 2...
	I0108 21:27:28.765343  268133 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:27:28.765439  268133 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:27:28.765964  268133 out.go:303] Setting JSON to false
	I0108 21:27:28.767361  268133 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4198,"bootTime":1673209051,"procs":489,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:27:28.767426  268133 start.go:135] virtualization: kvm guest
	I0108 21:27:28.770169  268133 out.go:177] * [newest-cni-212639] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:27:28.771838  268133 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:27:28.771763  268133 notify.go:220] Checking for updates...
	I0108 21:27:28.773655  268133 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:27:28.775212  268133 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:27:28.776901  268133 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:27:28.778479  268133 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:27:28.780293  268133 config.go:180] Loaded profile config "newest-cni-212639": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:27:28.780707  268133 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:27:28.810535  268133 docker.go:137] docker version: linux-20.10.22
	I0108 21:27:28.810637  268133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:27:28.914574  268133 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-08 21:27:28.831108877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:27:28.914685  268133 docker.go:254] overlay module found
	I0108 21:27:28.917116  268133 out.go:177] * Using the docker driver based on existing profile
	I0108 21:27:28.918812  268133 start.go:294] selected driver: docker
	I0108 21:27:28.918827  268133 start.go:838] validating driver "docker" against &{Name:newest-cni-212639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-212639 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:27:28.918922  268133 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:27:28.919894  268133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:27:29.021048  268133 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-08 21:27:28.941418387 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:27:29.021321  268133 start_flags.go:929] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0108 21:27:29.021343  268133 cni.go:95] Creating CNI manager for ""
	I0108 21:27:29.021349  268133 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:27:29.021359  268133 start_flags.go:317] config:
	{Name:newest-cni-212639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-212639 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:27:29.023686  268133 out.go:177] * Starting control plane node newest-cni-212639 in cluster newest-cni-212639
	I0108 21:27:29.025417  268133 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:27:29.027205  268133 out.go:177] * Pulling base image ...
	I0108 21:27:29.028641  268133 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:27:29.028684  268133 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0108 21:27:29.028697  268133 cache.go:57] Caching tarball of preloaded images
	I0108 21:27:29.028756  268133 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:27:29.028902  268133 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:27:29.028921  268133 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0108 21:27:29.029029  268133 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639/config.json ...
	I0108 21:27:29.055564  268133 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:27:29.055586  268133 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:27:29.055611  268133 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:27:29.055663  268133 start.go:364] acquiring machines lock for newest-cni-212639: {Name:mkda646b62b7d9c9186158724cd7269b307eb11f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:27:29.055782  268133 start.go:368] acquired machines lock for "newest-cni-212639" in 80.336µs
	I0108 21:27:29.055807  268133 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:27:29.055814  268133 fix.go:55] fixHost starting: 
	I0108 21:27:29.056094  268133 cli_runner.go:164] Run: docker container inspect newest-cni-212639 --format={{.State.Status}}
	I0108 21:27:29.080961  268133 fix.go:103] recreateIfNeeded on newest-cni-212639: state=Stopped err=<nil>
	W0108 21:27:29.080991  268133 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:27:29.083655  268133 out.go:177] * Restarting existing docker container for "newest-cni-212639" ...
	I0108 21:27:29.085590  268133 cli_runner.go:164] Run: docker start newest-cni-212639
	I0108 21:27:29.466862  268133 cli_runner.go:164] Run: docker container inspect newest-cni-212639 --format={{.State.Status}}
	I0108 21:27:29.495174  268133 kic.go:415] container "newest-cni-212639" state is running.
	I0108 21:27:29.495620  268133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-212639
	I0108 21:27:29.520835  268133 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639/config.json ...
	I0108 21:27:29.521069  268133 machine.go:88] provisioning docker machine ...
	I0108 21:27:29.521093  268133 ubuntu.go:169] provisioning hostname "newest-cni-212639"
	I0108 21:27:29.521172  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:29.548259  268133 main.go:134] libmachine: Using SSH client type: native
	I0108 21:27:29.548454  268133 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33042 <nil> <nil>}
	I0108 21:27:29.548474  268133 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-212639 && echo "newest-cni-212639" | sudo tee /etc/hostname
	I0108 21:27:29.549137  268133 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60116->127.0.0.1:33042: read: connection reset by peer
	I0108 21:27:32.679980  268133 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-212639
	
	I0108 21:27:32.680064  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:32.704822  268133 main.go:134] libmachine: Using SSH client type: native
	I0108 21:27:32.705004  268133 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33042 <nil> <nil>}
	I0108 21:27:32.705035  268133 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-212639' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-212639/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-212639' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:27:32.819191  268133 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:27:32.819229  268133 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:27:32.819265  268133 ubuntu.go:177] setting up certificates
	I0108 21:27:32.819276  268133 provision.go:83] configureAuth start
	I0108 21:27:32.819333  268133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-212639
	I0108 21:27:32.844313  268133 provision.go:138] copyHostCerts
	I0108 21:27:32.844376  268133 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:27:32.844396  268133 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:27:32.844470  268133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:27:32.844566  268133 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:27:32.844578  268133 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:27:32.844619  268133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:27:32.844692  268133 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:27:32.844702  268133 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:27:32.844737  268133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:27:32.844795  268133 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.newest-cni-212639 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-212639]
	I0108 21:27:33.024245  268133 provision.go:172] copyRemoteCerts
	I0108 21:27:33.024298  268133 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:27:33.024333  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:33.049410  268133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33042 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/newest-cni-212639/id_rsa Username:docker}
	I0108 21:27:33.134845  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:27:33.156477  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 21:27:33.175248  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:27:33.192199  268133 provision.go:86] duration metric: configureAuth took 372.90657ms
	I0108 21:27:33.192222  268133 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:27:33.192381  268133 config.go:180] Loaded profile config "newest-cni-212639": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:27:33.192391  268133 machine.go:91] provisioned docker machine in 3.671308135s
	I0108 21:27:33.192397  268133 start.go:300] post-start starting for "newest-cni-212639" (driver="docker")
	I0108 21:27:33.192403  268133 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:27:33.192436  268133 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:27:33.192465  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:33.217627  268133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33042 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/newest-cni-212639/id_rsa Username:docker}
	I0108 21:27:33.303289  268133 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:27:33.306127  268133 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:27:33.306149  268133 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:27:33.306158  268133 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:27:33.306163  268133 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:27:33.306171  268133 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:27:33.306212  268133 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:27:33.306273  268133 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:27:33.306347  268133 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:27:33.314075  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:27:33.331036  268133 start.go:303] post-start completed in 138.628109ms
	I0108 21:27:33.331098  268133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:27:33.331127  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:33.356822  268133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33042 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/newest-cni-212639/id_rsa Username:docker}
	I0108 21:27:33.440301  268133 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:27:33.444254  268133 fix.go:57] fixHost completed within 4.388434416s
	I0108 21:27:33.444285  268133 start.go:83] releasing machines lock for "newest-cni-212639", held for 4.388486617s
	I0108 21:27:33.444379  268133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-212639
	I0108 21:27:33.469347  268133 ssh_runner.go:195] Run: cat /version.json
	I0108 21:27:33.469403  268133 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:27:33.469413  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:33.469471  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:33.494776  268133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33042 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/newest-cni-212639/id_rsa Username:docker}
	I0108 21:27:33.498368  268133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33042 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/newest-cni-212639/id_rsa Username:docker}
	I0108 21:27:33.605894  268133 ssh_runner.go:195] Run: systemctl --version
	I0108 21:27:33.609960  268133 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:27:33.621087  268133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:27:33.630592  268133 docker.go:189] disabling docker service ...
	I0108 21:27:33.630638  268133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:27:33.640514  268133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:27:33.650036  268133 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:27:33.726594  268133 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:27:33.798762  268133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:27:33.807686  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:27:33.820033  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:27:33.827874  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:27:33.835458  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:27:33.842952  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 21:27:33.850460  268133 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:27:33.856397  268133 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:27:33.862289  268133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:27:33.939123  268133 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:27:34.004820  268133 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:27:34.004891  268133 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:27:34.008336  268133 start.go:472] Will wait 60s for crictl version
	I0108 21:27:34.008383  268133 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:27:34.034919  268133 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:27:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 21:27:45.082906  268133 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:27:45.105678  268133 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:27:45.105733  268133 ssh_runner.go:195] Run: containerd --version
	I0108 21:27:45.129490  268133 ssh_runner.go:195] Run: containerd --version
	I0108 21:27:45.154616  268133 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:27:45.156205  268133 cli_runner.go:164] Run: docker network inspect newest-cni-212639 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:27:45.179618  268133 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0108 21:27:45.182793  268133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:27:45.194213  268133 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0108 21:27:45.195893  268133 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:27:45.195991  268133 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:27:45.219527  268133 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:27:45.219551  268133 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:27:45.219596  268133 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:27:45.244817  268133 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:27:45.244842  268133 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:27:45.244912  268133 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:27:45.269367  268133 cni.go:95] Creating CNI manager for ""
	I0108 21:27:45.269386  268133 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:27:45.269398  268133 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0108 21:27:45.269413  268133 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-212639 NodeName:newest-cni-212639 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:27:45.269552  268133 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-212639"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:27:45.269652  268133 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-212639 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:newest-cni-212639 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:27:45.269696  268133 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:27:45.276883  268133 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:27:45.276956  268133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:27:45.283579  268133 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (547 bytes)
	I0108 21:27:45.295934  268133 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:27:45.309249  268133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2182 bytes)
	I0108 21:27:45.322187  268133 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:27:45.325139  268133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:27:45.334399  268133 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639 for IP: 192.168.94.2
	I0108 21:27:45.334513  268133 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:27:45.334548  268133 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:27:45.334607  268133 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639/client.key
	I0108 21:27:45.334656  268133 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639/apiserver.key.ad8e880a
	I0108 21:27:45.334687  268133 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639/proxy-client.key
	I0108 21:27:45.334779  268133 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:27:45.334804  268133 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:27:45.334815  268133 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:27:45.334838  268133 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:27:45.334859  268133 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:27:45.334883  268133 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:27:45.334918  268133 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:27:45.335569  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:27:45.353098  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:27:45.369628  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:27:45.387573  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/newest-cni-212639/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 21:27:45.405221  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:27:45.422744  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:27:45.439330  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:27:45.456759  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:27:45.473190  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:27:45.489674  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:27:45.507995  268133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:27:45.524758  268133 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:27:45.537439  268133 ssh_runner.go:195] Run: openssl version
	I0108 21:27:45.542325  268133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:27:45.549667  268133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:27:45.552593  268133 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:27:45.552630  268133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:27:45.557399  268133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:27:45.564013  268133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:27:45.570969  268133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:27:45.573973  268133 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:27:45.574010  268133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:27:45.578719  268133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:27:45.585541  268133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:27:45.592761  268133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:27:45.595725  268133 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:27:45.595771  268133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:27:45.601221  268133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
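	The certificate lines above follow the standard trust-store convention: each extra CA is copied into /usr/share/ca-certificates and then symlinked under its OpenSSL subject hash in /etc/ssl/certs. A minimal sketch of that same convention, using an example file name rather than one from this run:
	
	  # hypothetical CA file; the hash-named symlink is what the openssl/ln pairs above create
	  sudo cp myCA.pem /usr/share/ca-certificates/myCA.pem
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/myCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/myCA.pem "/etc/ssl/certs/${hash}.0"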
	I0108 21:27:45.607901  268133 kubeadm.go:396] StartCluster: {Name:newest-cni-212639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-212639 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet:
MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:27:45.608005  268133 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:27:45.608052  268133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:27:45.633536  268133 cri.go:87] found id: "78b701fbfc96f080824c257d20a49fe230b1eb5ca9370f534ff782908c77cf37"
	I0108 21:27:45.633570  268133 cri.go:87] found id: "1d25e6a5a7dea83f5e63bcafd8eb05c2d27fee85fbd87c58aca021bc8f201430"
	I0108 21:27:45.633583  268133 cri.go:87] found id: "95140a27d84ddbc15ae0a11f58614b689c8848aee9bc7fa466f8fca6c5bf7d8e"
	I0108 21:27:45.633593  268133 cri.go:87] found id: "b9a2bb0dcb7ae675901085e39942b017b75f9ae2eda52d93f49c5d03274061d2"
	I0108 21:27:45.633606  268133 cri.go:87] found id: "da92be5f72043d327d813ac6b780fc65927384a0b7de1840da48375726eed05b"
	I0108 21:27:45.633617  268133 cri.go:87] found id: "9311bf1fab69e6ab9a390d402eb197e8863ed8e5a3d87c6f80aa4b5fbde84c47"
	I0108 21:27:45.633626  268133 cri.go:87] found id: ""
	I0108 21:27:45.633669  268133 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:27:45.644525  268133 cri.go:114] JSON = null
	W0108 21:27:45.644569  268133 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
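	The unpause warning above comes from cross-checking two views of the runtime: crictl reports six kube-system containers, while runc, asked about the same containerd root, returns an empty list, so there is nothing to unpause. The same comparison by hand on the node would look roughly like this (commands and paths taken from the log):
	
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l
	  sudo runc --root /run/containerd/runc/k8s.io list -f json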
	I0108 21:27:45.644606  268133 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:27:45.651264  268133 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:27:45.651283  268133 kubeadm.go:627] restartCluster start
	I0108 21:27:45.651318  268133 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:27:45.657485  268133 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:45.658187  268133 kubeconfig.go:135] verify returned: extract IP: "newest-cni-212639" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:27:45.658612  268133 kubeconfig.go:146] "newest-cni-212639" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:27:45.659221  268133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:27:45.660667  268133 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:27:45.666838  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:45.666908  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:45.674578  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:45.874963  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:45.875029  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:45.883431  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:46.074677  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:46.074754  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:46.083708  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:46.274982  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:46.275079  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:46.283693  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:46.474959  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:46.475029  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:46.483304  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:46.675620  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:46.675725  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:46.684152  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:46.875455  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:46.875561  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:46.884037  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:47.075372  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:47.075460  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:47.084339  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:47.275632  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:47.275732  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:47.284201  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:47.475551  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:47.475642  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:47.484190  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:47.675533  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:47.675634  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:47.684085  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:47.875368  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:47.875467  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:47.884031  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:48.075330  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:48.075405  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:48.084246  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:48.275538  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:48.275627  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:48.283816  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:48.475063  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:48.475141  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:48.483695  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:48.675026  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:48.675107  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:48.683990  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:48.684014  268133 api_server.go:165] Checking apiserver status ...
	I0108 21:27:48.684049  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:27:48.692348  268133 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:48.692387  268133 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0108 21:27:48.692395  268133 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:27:48.692407  268133 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:27:48.692452  268133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:27:48.716223  268133 cri.go:87] found id: "78b701fbfc96f080824c257d20a49fe230b1eb5ca9370f534ff782908c77cf37"
	I0108 21:27:48.716249  268133 cri.go:87] found id: "1d25e6a5a7dea83f5e63bcafd8eb05c2d27fee85fbd87c58aca021bc8f201430"
	I0108 21:27:48.716255  268133 cri.go:87] found id: "95140a27d84ddbc15ae0a11f58614b689c8848aee9bc7fa466f8fca6c5bf7d8e"
	I0108 21:27:48.716261  268133 cri.go:87] found id: "b9a2bb0dcb7ae675901085e39942b017b75f9ae2eda52d93f49c5d03274061d2"
	I0108 21:27:48.716267  268133 cri.go:87] found id: "da92be5f72043d327d813ac6b780fc65927384a0b7de1840da48375726eed05b"
	I0108 21:27:48.716280  268133 cri.go:87] found id: "9311bf1fab69e6ab9a390d402eb197e8863ed8e5a3d87c6f80aa4b5fbde84c47"
	I0108 21:27:48.716290  268133 cri.go:87] found id: ""
	I0108 21:27:48.716300  268133 cri.go:232] Stopping containers: [78b701fbfc96f080824c257d20a49fe230b1eb5ca9370f534ff782908c77cf37 1d25e6a5a7dea83f5e63bcafd8eb05c2d27fee85fbd87c58aca021bc8f201430 95140a27d84ddbc15ae0a11f58614b689c8848aee9bc7fa466f8fca6c5bf7d8e b9a2bb0dcb7ae675901085e39942b017b75f9ae2eda52d93f49c5d03274061d2 da92be5f72043d327d813ac6b780fc65927384a0b7de1840da48375726eed05b 9311bf1fab69e6ab9a390d402eb197e8863ed8e5a3d87c6f80aa4b5fbde84c47]
	I0108 21:27:48.716363  268133 ssh_runner.go:195] Run: which crictl
	I0108 21:27:48.719338  268133 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 78b701fbfc96f080824c257d20a49fe230b1eb5ca9370f534ff782908c77cf37 1d25e6a5a7dea83f5e63bcafd8eb05c2d27fee85fbd87c58aca021bc8f201430 95140a27d84ddbc15ae0a11f58614b689c8848aee9bc7fa466f8fca6c5bf7d8e b9a2bb0dcb7ae675901085e39942b017b75f9ae2eda52d93f49c5d03274061d2 da92be5f72043d327d813ac6b780fc65927384a0b7de1840da48375726eed05b 9311bf1fab69e6ab9a390d402eb197e8863ed8e5a3d87c6f80aa4b5fbde84c47
	I0108 21:27:48.745901  268133 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:27:48.755804  268133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:27:48.762868  268133 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan  8 21:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  8 21:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jan  8 21:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  8 21:27 /etc/kubernetes/scheduler.conf
	
	I0108 21:27:48.762934  268133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 21:27:48.770114  268133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 21:27:48.776890  268133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 21:27:48.783617  268133 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:48.783684  268133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 21:27:48.790230  268133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 21:27:48.797262  268133 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:27:48.797322  268133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 21:27:48.803754  268133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:27:48.810579  268133 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:27:48.810604  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:27:48.854700  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:27:49.955777  268133 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.100955115s)
	I0108 21:27:49.955819  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:27:50.098599  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:27:50.149752  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
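	Because existing configuration files were found, the cluster is rebuilt by re-running individual kubeadm init phases rather than a full init. Stripped of the PATH wrapper, the sequence visible above (the addon phase follows later, once the API server is healthy) is:
	
	  sudo kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	  sudo kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	  sudo kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	  sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	  sudo kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml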
	I0108 21:27:50.243811  268133 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:27:50.243870  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:27:50.754110  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:27:51.254107  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:27:51.320481  268133 api_server.go:71] duration metric: took 1.076675567s to wait for apiserver process to appear ...
	I0108 21:27:51.320509  268133 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:27:51.320518  268133 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:27:51.320879  268133 api_server.go:268] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0108 21:27:51.821576  268133 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:27:54.685150  268133 api_server.go:278] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:27:54.685191  268133 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:27:54.821404  268133 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:27:54.828317  268133 api_server.go:278] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:27:54.828346  268133 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:27:55.320971  268133 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:27:55.325370  268133 api_server.go:278] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:27:55.325392  268133 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:27:55.820972  268133 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:27:55.825305  268133 api_server.go:278] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0108 21:27:55.830801  268133 api_server.go:140] control plane version: v1.25.3
	I0108 21:27:55.830824  268133 api_server.go:130] duration metric: took 4.510310193s to wait for apiserver health ...
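	The healthz poll above walks through the expected startup states: connection refused before the server binds, 403 while anonymous access is still blocked, 500 while rbac/bootstrap-roles and the other post-start hooks finish, then 200. A hand-rolled version of the same probe (anonymous, TLS verification skipped, address taken from the log) would be:
	
	  curl -sk https://192.168.94.2:8443/healthz
	  curl -sk "https://192.168.94.2:8443/healthz?verbose"   # lists which checks are still failing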
	I0108 21:27:55.830833  268133 cni.go:95] Creating CNI manager for ""
	I0108 21:27:55.830842  268133 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:27:55.833222  268133 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:27:55.835019  268133 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:27:55.839173  268133 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:27:55.839213  268133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:27:55.852687  268133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
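	With the docker driver and containerd runtime, minikube picks kindnet, stages its manifest at /var/tmp/minikube/cni.yaml, and applies it with the node-local kubectl as above. A plausible follow-up check, assuming the daemonset keeps the kindnet name suggested by the pod names later in the log:
	
	  sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get daemonset kindnet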
	I0108 21:27:56.681804  268133 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:27:56.691379  268133 system_pods.go:59] 9 kube-system pods found
	I0108 21:27:56.691412  268133 system_pods.go:61] "coredns-565d847f94-jlgss" [383bbf49-200e-4180-9174-07b6e59ff237] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:27:56.691438  268133 system_pods.go:61] "etcd-newest-cni-212639" [0b37c532-d886-462c-955b-a24131a077f1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:27:56.691452  268133 system_pods.go:61] "kindnet-b2t2w" [01142b5d-96e1-480a-98f6-4e7f5f90cc73] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:27:56.691461  268133 system_pods.go:61] "kube-apiserver-newest-cni-212639" [ca27e73e-9789-4186-9761-5d5c9c077bf0] Running
	I0108 21:27:56.691490  268133 system_pods.go:61] "kube-controller-manager-newest-cni-212639" [1e0c0317-02ac-4013-b05a-2ece64459b36] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:27:56.691499  268133 system_pods.go:61] "kube-proxy-9dkpd" [6c10127e-fb6d-479c-9f1a-002abe670b1d] Running
	I0108 21:27:56.691508  268133 system_pods.go:61] "kube-scheduler-newest-cni-212639" [1b487550-9f18-460e-816c-ddcbf1d8ff5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:27:56.691519  268133 system_pods.go:61] "metrics-server-5c8fd5cf8-zn4gr" [9713d296-cf3d-40f3-b710-08eaf7d22988] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:27:56.691533  268133 system_pods.go:61] "storage-provisioner" [6f8b2d5a-0a17-461f-891e-dd4c4ccc7006] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:27:56.691543  268133 system_pods.go:74] duration metric: took 9.717294ms to wait for pod list to return data ...
	I0108 21:27:56.691556  268133 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:27:56.696614  268133 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:27:56.696637  268133 node_conditions.go:123] node cpu capacity is 8
	I0108 21:27:56.696647  268133 node_conditions.go:105] duration metric: took 5.086053ms to run NodePressure ...
	I0108 21:27:56.696663  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:27:56.828302  268133 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:27:56.835024  268133 ops.go:34] apiserver oom_adj: -16
	I0108 21:27:56.835047  268133 kubeadm.go:631] restartCluster took 11.183758492s
	I0108 21:27:56.835057  268133 kubeadm.go:398] StartCluster complete in 11.227163302s
	I0108 21:27:56.835074  268133 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:27:56.835156  268133 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:27:56.836399  268133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:27:56.840086  268133 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-212639" rescaled to 1
	I0108 21:27:56.840139  268133 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:27:56.842342  268133 out.go:177] * Verifying Kubernetes components...
	I0108 21:27:56.840188  268133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:27:56.840203  268133 addons.go:486] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0108 21:27:56.840364  268133 config.go:180] Loaded profile config "newest-cni-212639": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:27:56.844062  268133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:27:56.844103  268133 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-212639"
	I0108 21:27:56.844111  268133 addons.go:65] Setting default-storageclass=true in profile "newest-cni-212639"
	I0108 21:27:56.844126  268133 addons.go:65] Setting metrics-server=true in profile "newest-cni-212639"
	I0108 21:27:56.844127  268133 addons.go:227] Setting addon storage-provisioner=true in "newest-cni-212639"
	I0108 21:27:56.844131  268133 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-212639"
	W0108 21:27:56.844139  268133 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:27:56.844142  268133 addons.go:227] Setting addon metrics-server=true in "newest-cni-212639"
	I0108 21:27:56.844152  268133 addons.go:65] Setting dashboard=true in profile "newest-cni-212639"
	W0108 21:27:56.844159  268133 addons.go:236] addon metrics-server should already be in state true
	I0108 21:27:56.844169  268133 addons.go:227] Setting addon dashboard=true in "newest-cni-212639"
	W0108 21:27:56.844176  268133 addons.go:236] addon dashboard should already be in state true
	I0108 21:27:56.844186  268133 host.go:66] Checking if "newest-cni-212639" exists ...
	I0108 21:27:56.844190  268133 host.go:66] Checking if "newest-cni-212639" exists ...
	I0108 21:27:56.844217  268133 host.go:66] Checking if "newest-cni-212639" exists ...
	I0108 21:27:56.844457  268133 cli_runner.go:164] Run: docker container inspect newest-cni-212639 --format={{.State.Status}}
	I0108 21:27:56.844614  268133 cli_runner.go:164] Run: docker container inspect newest-cni-212639 --format={{.State.Status}}
	I0108 21:27:56.844642  268133 cli_runner.go:164] Run: docker container inspect newest-cni-212639 --format={{.State.Status}}
	I0108 21:27:56.844615  268133 cli_runner.go:164] Run: docker container inspect newest-cni-212639 --format={{.State.Status}}
	I0108 21:27:56.856799  268133 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:27:56.856857  268133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:27:56.893927  268133 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:27:56.895774  268133 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:27:56.897447  268133 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:27:56.899092  268133 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:27:56.900745  268133 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:27:56.900759  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:27:56.900805  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:56.899109  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:27:56.902469  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:56.904190  268133 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:27:56.906038  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:27:56.906056  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:27:56.906095  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:56.913846  268133 addons.go:227] Setting addon default-storageclass=true in "newest-cni-212639"
	W0108 21:27:56.913869  268133 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:27:56.913898  268133 host.go:66] Checking if "newest-cni-212639" exists ...
	I0108 21:27:56.914336  268133 cli_runner.go:164] Run: docker container inspect newest-cni-212639 --format={{.State.Status}}
	I0108 21:27:56.940644  268133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33042 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/newest-cni-212639/id_rsa Username:docker}
	I0108 21:27:56.944505  268133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33042 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/newest-cni-212639/id_rsa Username:docker}
	I0108 21:27:56.950971  268133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33042 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/newest-cni-212639/id_rsa Username:docker}
	I0108 21:27:56.956692  268133 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:27:56.956715  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:27:56.956765  268133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-212639
	I0108 21:27:56.970097  268133 api_server.go:71] duration metric: took 129.918632ms to wait for apiserver process to appear ...
	I0108 21:27:56.970122  268133 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:27:56.970131  268133 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0108 21:27:56.971086  268133 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 21:27:56.976505  268133 api_server.go:278] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0108 21:27:56.977444  268133 api_server.go:140] control plane version: v1.25.3
	I0108 21:27:56.977502  268133 api_server.go:130] duration metric: took 7.373609ms to wait for apiserver health ...
	I0108 21:27:56.977524  268133 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:27:56.984004  268133 system_pods.go:59] 9 kube-system pods found
	I0108 21:27:56.984038  268133 system_pods.go:61] "coredns-565d847f94-jlgss" [383bbf49-200e-4180-9174-07b6e59ff237] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:27:56.984051  268133 system_pods.go:61] "etcd-newest-cni-212639" [0b37c532-d886-462c-955b-a24131a077f1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:27:56.984062  268133 system_pods.go:61] "kindnet-b2t2w" [01142b5d-96e1-480a-98f6-4e7f5f90cc73] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:27:56.984071  268133 system_pods.go:61] "kube-apiserver-newest-cni-212639" [ca27e73e-9789-4186-9761-5d5c9c077bf0] Running
	I0108 21:27:56.984081  268133 system_pods.go:61] "kube-controller-manager-newest-cni-212639" [1e0c0317-02ac-4013-b05a-2ece64459b36] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:27:56.984092  268133 system_pods.go:61] "kube-proxy-9dkpd" [6c10127e-fb6d-479c-9f1a-002abe670b1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:27:56.984101  268133 system_pods.go:61] "kube-scheduler-newest-cni-212639" [1b487550-9f18-460e-816c-ddcbf1d8ff5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:27:56.984110  268133 system_pods.go:61] "metrics-server-5c8fd5cf8-zn4gr" [9713d296-cf3d-40f3-b710-08eaf7d22988] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:27:56.984118  268133 system_pods.go:61] "storage-provisioner" [6f8b2d5a-0a17-461f-891e-dd4c4ccc7006] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:27:56.984125  268133 system_pods.go:74] duration metric: took 6.587949ms to wait for pod list to return data ...
	I0108 21:27:56.984135  268133 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:27:56.986465  268133 default_sa.go:45] found service account: "default"
	I0108 21:27:56.986481  268133 default_sa.go:55] duration metric: took 2.340833ms for default service account to be created ...
	I0108 21:27:56.986492  268133 kubeadm.go:573] duration metric: took 146.318184ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0108 21:27:56.986511  268133 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:27:56.987573  268133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33042 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/newest-cni-212639/id_rsa Username:docker}
	I0108 21:27:56.988961  268133 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:27:56.988981  268133 node_conditions.go:123] node cpu capacity is 8
	I0108 21:27:56.988989  268133 node_conditions.go:105] duration metric: took 2.473937ms to run NodePressure ...
	I0108 21:27:56.988999  268133 start.go:217] waiting for startup goroutines ...
	I0108 21:27:57.042655  268133 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:27:57.042675  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:27:57.049738  268133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:27:57.057234  268133 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:27:57.057257  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:27:57.066234  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:27:57.066261  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:27:57.071993  268133 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:27:57.072018  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:27:57.111308  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:27:57.111338  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:27:57.120602  268133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:27:57.126801  268133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:27:57.127797  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:27:57.127818  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:27:57.144715  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:27:57.144799  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:27:57.223994  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:27:57.224018  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:27:57.243113  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:27:57.243141  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:27:57.325619  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:27:57.325649  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:27:57.343985  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:27:57.344014  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:27:57.429266  268133 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:27:57.429292  268133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:27:57.445559  268133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
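	Each addon is staged under /etc/kubernetes/addons on the node and applied with the bundled kubectl against the node-local kubeconfig, exactly as in the Run lines above. Checking that the dashboard objects from that last apply actually came up could be done the same way (a sketch reusing paths from the log):
	
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl \
	    get pods -A | grep -i dashboard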
	I0108 21:27:57.742586  268133 addons.go:457] Verifying addon metrics-server=true in "newest-cni-212639"
	I0108 21:27:58.012564  268133 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-212639 addons enable metrics-server	
	
	
	I0108 21:27:58.014398  268133 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0108 21:27:58.016196  268133 addons.go:488] enableAddons completed in 1.17599155s
	I0108 21:27:58.016563  268133 ssh_runner.go:195] Run: rm -f paused
	I0108 21:27:58.072101  268133 start.go:536] kubectl: 1.26.0, cluster: 1.25.3 (minor skew: 1)
	I0108 21:27:58.074305  268133 out.go:177] * Done! kubectl is now configured to use "newest-cni-212639" cluster and "default" namespace by default
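	The closing version line records a one-minor-version skew between the host kubectl (1.26.0) and the cluster (1.25.3), which is within kubectl's supported +/-1 skew. The same check from the host, as a hypothetical invocation rather than one captured in this run:
	
	  kubectl version --output=json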
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	ed44b6cd92e88       d6e3e26021b60       3 minutes ago       Exited              kindnet-cni               3                   156951a7e6ad9
	4fdaee2b10f29       c21b0c7400f98       12 minutes ago      Running             kube-proxy                0                   700fdf969a65f
	a9e20d8377a66       b2756210eeabf       12 minutes ago      Running             etcd                      0                   3177f12cbcc92
	3baeebbc6da60       b305571ca60a5       12 minutes ago      Running             kube-apiserver            0                   0c6dba6ffda90
	dc587e05c9875       06a629a7e51cd       12 minutes ago      Running             kube-controller-manager   0                   6963fcc252763
	18030e6256a0f       301ddc62b80b1       12 minutes ago      Running             kube-scheduler            0                   40f53ffcd3927
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sun 2023-01-08 21:18:35 UTC, end at Sun 2023-01-08 21:31:11 UTC. --
	Jan 08 21:24:31 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:24:31.353283530Z" level=warning msg="cleaning up after shim disconnected" id=a3a1060e1346768567a9ea4fb7fb7b0012cc8417fe5011dd546dd1255ed49b4d namespace=k8s.io
	Jan 08 21:24:31 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:24:31.353297928Z" level=info msg="cleaning up dead shim"
	Jan 08 21:24:31 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:24:31.362007670Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:24:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3162 runtime=io.containerd.runc.v2\n"
	Jan 08 21:24:31 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:24:31.796218196Z" level=info msg="RemoveContainer for \"574f15edf833175e912660c2f5c10a57435ef520281471547e15dedce5a8781a\""
	Jan 08 21:24:31 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:24:31.803024051Z" level=info msg="RemoveContainer for \"574f15edf833175e912660c2f5c10a57435ef520281471547e15dedce5a8781a\" returns successfully"
	Jan 08 21:24:44 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:24:44.233533359Z" level=info msg="CreateContainer within sandbox \"156951a7e6ad93a05e095bff14d2097ddbf5a7bcfa8469c08b265cf49b68920b\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jan 08 21:24:44 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:24:44.246607491Z" level=info msg="CreateContainer within sandbox \"156951a7e6ad93a05e095bff14d2097ddbf5a7bcfa8469c08b265cf49b68920b\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"e7d77d623de4f297853eafcb517d6e564f9dbe8ed547bd9dd3d95905094907db\""
	Jan 08 21:24:44 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:24:44.247163693Z" level=info msg="StartContainer for \"e7d77d623de4f297853eafcb517d6e564f9dbe8ed547bd9dd3d95905094907db\""
	Jan 08 21:24:44 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:24:44.326500700Z" level=info msg="StartContainer for \"e7d77d623de4f297853eafcb517d6e564f9dbe8ed547bd9dd3d95905094907db\" returns successfully"
	Jan 08 21:27:24 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:24.864600123Z" level=info msg="shim disconnected" id=e7d77d623de4f297853eafcb517d6e564f9dbe8ed547bd9dd3d95905094907db
	Jan 08 21:27:24 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:24.864655647Z" level=warning msg="cleaning up after shim disconnected" id=e7d77d623de4f297853eafcb517d6e564f9dbe8ed547bd9dd3d95905094907db namespace=k8s.io
	Jan 08 21:27:24 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:24.864665974Z" level=info msg="cleaning up dead shim"
	Jan 08 21:27:24 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:24.873334658Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:27:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3639 runtime=io.containerd.runc.v2\n"
	Jan 08 21:27:25 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:25.050837242Z" level=info msg="RemoveContainer for \"a3a1060e1346768567a9ea4fb7fb7b0012cc8417fe5011dd546dd1255ed49b4d\""
	Jan 08 21:27:25 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:25.060940860Z" level=info msg="RemoveContainer for \"a3a1060e1346768567a9ea4fb7fb7b0012cc8417fe5011dd546dd1255ed49b4d\" returns successfully"
	Jan 08 21:27:53 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:53.239117101Z" level=info msg="CreateContainer within sandbox \"156951a7e6ad93a05e095bff14d2097ddbf5a7bcfa8469c08b265cf49b68920b\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jan 08 21:27:53 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:53.252621250Z" level=info msg="CreateContainer within sandbox \"156951a7e6ad93a05e095bff14d2097ddbf5a7bcfa8469c08b265cf49b68920b\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a\""
	Jan 08 21:27:53 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:53.253259212Z" level=info msg="StartContainer for \"ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a\""
	Jan 08 21:27:53 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:27:53.329266240Z" level=info msg="StartContainer for \"ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a\" returns successfully"
	Jan 08 21:30:33 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:30:33.953098822Z" level=info msg="shim disconnected" id=ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a
	Jan 08 21:30:33 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:30:33.953169572Z" level=warning msg="cleaning up after shim disconnected" id=ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a namespace=k8s.io
	Jan 08 21:30:33 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:30:33.953188398Z" level=info msg="cleaning up dead shim"
	Jan 08 21:30:33 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:30:33.962418312Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:30:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4128 runtime=io.containerd.runc.v2\n"
	Jan 08 21:30:34 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:30:34.319837538Z" level=info msg="RemoveContainer for \"e7d77d623de4f297853eafcb517d6e564f9dbe8ed547bd9dd3d95905094907db\""
	Jan 08 21:30:34 old-k8s-version-211828 containerd[512]: time="2023-01-08T21:30:34.324704126Z" level=info msg="RemoveContainer for \"e7d77d623de4f297853eafcb517d6e564f9dbe8ed547bd9dd3d95905094907db\" returns successfully"
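	The containerd journal above (from the old-k8s-version-211828 node) shows kindnet-cni exiting and being recreated repeatedly, with attempts 2 and 3 visible. Digging into those restarts on the node would typically start with something like this (a sketch; the id prefix comes from the container status table above):
	
	  sudo crictl ps -a --name kindnet-cni
	  sudo crictl logs ed44b6cd92e88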
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-211828
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-211828
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=old-k8s-version-211828
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T21_18_51_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 21:18:45 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 21:30:16 +0000   Sun, 08 Jan 2023 21:18:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 21:30:16 +0000   Sun, 08 Jan 2023 21:18:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 21:30:16 +0000   Sun, 08 Jan 2023 21:18:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 08 Jan 2023 21:30:16 +0000   Sun, 08 Jan 2023 21:18:42 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-211828
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304681132Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32871748Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304681132Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32871748Ki
	 pods:               110
	System Info:
	 Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	 System UUID:                a9413ae7-d165-4b76-a22b-73b89e3e2d6a
	 Boot ID:                    abb1671c-ddf5-4694-bdc8-1024e5cc0b18
	 Kernel Version:             5.15.0-1025-gcp
	 OS Image:                   Ubuntu 20.04.5 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.6.10
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-211828                       0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         11m
	  kube-system                kindnet-9z2n8                                     100m (1%!)(MISSING)     100m (1%!)(MISSING)   50Mi (0%!)(MISSING)        50Mi (0%!)(MISSING)      12m
	  kube-system                kube-apiserver-old-k8s-version-211828             250m (3%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         11m
	  kube-system                kube-controller-manager-old-k8s-version-211828    200m (2%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         11m
	  kube-system                kube-proxy-jqh6r                                  0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         12m
	  kube-system                kube-scheduler-old-k8s-version-211828             100m (1%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%!)(MISSING)  100m (1%!)(MISSING)
	  memory             50Mi (0%!)(MISSING)  50Mi (0%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)     0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet, old-k8s-version-211828     Node old-k8s-version-211828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet, old-k8s-version-211828     Node old-k8s-version-211828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet, old-k8s-version-211828     Node old-k8s-version-211828 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-211828  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +2.971851] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027844] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027909] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[Jan 8 21:19] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.006215] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023951] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.967852] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.035798] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023925] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.940341] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.027361] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.019905] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	
	* 
	* ==> etcd [a9e20d8377a666867d2e18a3ea12818eaa42542c47d99b0ba20c5f7b3c9a8f70] <==
	* 2023-01-08 21:18:42.222758 I | raft: ea7e25599daad906 became follower at term 1
	2023-01-08 21:18:42.230174 W | auth: simple token is not cryptographically signed
	2023-01-08 21:18:42.233023 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-01-08 21:18:42.234706 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-01-08 21:18:42.234820 I | embed: listening for metrics on http://192.168.76.2:2381
	2023-01-08 21:18:42.235027 I | etcdserver: ea7e25599daad906 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-01-08 21:18:42.235302 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-01-08 21:18:42.236050 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2023-01-08 21:18:42.823124 I | raft: ea7e25599daad906 is starting a new election at term 1
	2023-01-08 21:18:42.823163 I | raft: ea7e25599daad906 became candidate at term 2
	2023-01-08 21:18:42.823187 I | raft: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	2023-01-08 21:18:42.823199 I | raft: ea7e25599daad906 became leader at term 2
	2023-01-08 21:18:42.823207 I | raft: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2023-01-08 21:18:42.823565 I | etcdserver: setting up the initial cluster version to 3.3
	2023-01-08 21:18:42.823593 I | etcdserver: published {Name:old-k8s-version-211828 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2023-01-08 21:18:42.823608 I | embed: ready to serve client requests
	2023-01-08 21:18:42.823618 I | embed: ready to serve client requests
	2023-01-08 21:18:42.824159 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-01-08 21:18:42.824619 I | etcdserver/api: enabled capabilities for version 3.3
	2023-01-08 21:18:42.825884 I | embed: serving client requests on 192.168.76.2:2379
	2023-01-08 21:18:42.826236 I | embed: serving client requests on 127.0.0.1:2379
	2023-01-08 21:19:59.361725 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:215" took too long (117.600264ms) to execute
	2023-01-08 21:20:00.632412 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (349.431998ms) to execute
	2023-01-08 21:28:42.839663 I | mvcc: store.index: compact 457
	2023-01-08 21:28:42.840457 I | mvcc: finished scheduled compaction at 457 (took 474.586µs)
	
	* 
	* ==> kernel <==
	*  21:31:11 up  1:13,  0 users,  load average: 0.49, 0.66, 1.21
	Linux old-k8s-version-211828 5.15.0-1025-gcp #32~20.04.2-Ubuntu SMP Tue Nov 29 08:31:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [3baeebbc6da6011661ac440d440193720d7cb3ffc1d6f51175b239cc7994d8d4] <==
	* I0108 21:18:45.832401       1 naming_controller.go:288] Starting NamingConditionController
	I0108 21:18:45.832488       1 establishing_controller.go:73] Starting EstablishingController
	I0108 21:18:45.832179       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E0108 21:18:45.833021       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.76.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0108 21:18:45.931842       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:18:45.932088       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:18:45.932770       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0108 21:18:45.932802       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:18:46.831581       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0108 21:18:46.831614       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0108 21:18:46.831759       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 21:18:46.835235       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I0108 21:18:46.838488       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I0108 21:18:46.838509       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0108 21:18:47.618962       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:18:48.612611       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:18:48.892810       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0108 21:18:49.229017       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0108 21:18:49.229650       1 controller.go:606] quota admission added evaluator for: endpoints
	I0108 21:18:50.129578       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0108 21:18:50.537710       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0108 21:18:50.897501       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0108 21:19:05.581835       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0108 21:19:05.598947       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0108 21:19:05.761910       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [dc587e05c9875fe35b86a28d7d5b8fc7bedc7907ec9abcf12c1883d15804ed4d] <==
	* I0108 21:19:05.532786       1 shared_informer.go:204] Caches are synced for HPA 
	I0108 21:19:05.577838       1 shared_informer.go:204] Caches are synced for daemon sets 
	I0108 21:19:05.583510       1 shared_informer.go:204] Caches are synced for stateful set 
	I0108 21:19:05.589186       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"a0f32b10-75af-4660-85eb-9e2d60222d15", APIVersion:"apps/v1", ResourceVersion:"226", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-9z2n8
	I0108 21:19:05.591195       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"5562d924-d3c2-495e-8160-7930ac4bed98", APIVersion:"apps/v1", ResourceVersion:"214", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-jqh6r
	E0108 21:19:05.603944       1 daemon_controller.go:302] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"a0f32b10-75af-4660-85eb-9e2d60222d15", ResourceVersion:"226", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63808809531, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20221004-44d545d1\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerati
ons\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.mk\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001002e80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:
[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001002ea0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.Vsphere
VirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001002ec0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolume
Source)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001002ee0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)
(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.
Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20221004-44d545d1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001002f00)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001002f40)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resou
rce.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0011562d0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.Eph
emeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0007111e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0011652c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.Resou
rceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00013c870)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000711260)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	E0108 21:19:05.611711       1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"5562d924-d3c2-495e-8160-7930ac4bed98", ResourceVersion:"214", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63808809530, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001002da0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Names
pace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeS
ource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001a2a980), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001002dc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001002de0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.A
zureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001002e20)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMo
de)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001156140), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000710ed8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServic
eAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001165260), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy
{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00013c868)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000710f18)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0108 21:19:05.726906       1 shared_informer.go:204] Caches are synced for disruption 
	I0108 21:19:05.726931       1 disruption.go:341] Sending events to api server.
	I0108 21:19:05.734015       1 shared_informer.go:204] Caches are synced for resource quota 
	I0108 21:19:05.759951       1 shared_informer.go:204] Caches are synced for deployment 
	I0108 21:19:05.764185       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"2341f665-8f22-48e3-9b76-dbd488b1235d", APIVersion:"apps/v1", ResourceVersion:"320", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 1
	I0108 21:19:05.775680       1 shared_informer.go:204] Caches are synced for resource quota 
	I0108 21:19:05.783488       1 shared_informer.go:204] Caches are synced for ReplicaSet 
	I0108 21:19:05.787135       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"993e0e3b-0673-4494-853e-0ee4024d61de", APIVersion:"apps/v1", ResourceVersion:"336", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-lm49s
	I0108 21:19:05.788779       1 shared_informer.go:204] Caches are synced for expand 
	I0108 21:19:05.788903       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0108 21:19:05.788923       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0108 21:19:05.803010       1 shared_informer.go:204] Caches are synced for certificate 
	I0108 21:19:05.807809       1 shared_informer.go:204] Caches are synced for persistent volume 
	I0108 21:19:05.833561       1 shared_informer.go:204] Caches are synced for certificate 
	I0108 21:19:05.834019       1 shared_informer.go:204] Caches are synced for attach detach 
	I0108 21:19:05.835463       1 shared_informer.go:204] Caches are synced for PV protection 
	I0108 21:19:05.839437       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0108 21:19:05.850779       1 log.go:172] [INFO] signed certificate with serial number 477651019640136324065142830251145268032180874070
	
	* 
	* ==> kube-proxy [4fdaee2b10f2982887b9869fcd0c1dc5ebe85380068a2c7ca6e308bbd418bf25] <==
	* W0108 21:19:06.244257       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0108 21:19:06.253650       1 node.go:135] Successfully retrieved node IP: 192.168.76.2
	I0108 21:19:06.253708       1 server_others.go:149] Using iptables Proxier.
	I0108 21:19:06.254406       1 server.go:529] Version: v1.16.0
	I0108 21:19:06.255737       1 config.go:131] Starting endpoints config controller
	I0108 21:19:06.255772       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0108 21:19:06.255807       1 config.go:313] Starting service config controller
	I0108 21:19:06.255831       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0108 21:19:06.409897       1 shared_informer.go:204] Caches are synced for service config 
	I0108 21:19:06.409933       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [18030e6256a0f097fd3fd026a18690f5b7e901b5dacd851696eb59d51effb330] <==
	* I0108 21:18:45.921798       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0108 21:18:46.015772       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:18:46.016363       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:18:46.017243       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:18:46.017333       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:18:46.017720       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:18:46.017731       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:18:46.017872       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:18:46.018105       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:18:46.019057       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:18:46.019300       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:18:46.020625       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:18:47.017213       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:18:47.018225       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:18:47.020195       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:18:47.020884       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:18:47.021712       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:18:47.022941       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:18:47.023944       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:18:47.024799       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:18:47.028700       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:18:47.029806       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:18:47.031840       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:19:06.775194       1 factory.go:585] pod is already present in the activeQ
	E0108 21:23:08.286026       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:18:35 UTC, end at Sun 2023-01-08 21:31:12 UTC. --
	Jan 08 21:29:26 old-k8s-version-211828 kubelet[926]: E0108 21:29:26.505975     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:29:31 old-k8s-version-211828 kubelet[926]: E0108 21:29:31.506682     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:29:36 old-k8s-version-211828 kubelet[926]: E0108 21:29:36.507411     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:29:41 old-k8s-version-211828 kubelet[926]: E0108 21:29:41.508086     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:29:46 old-k8s-version-211828 kubelet[926]: E0108 21:29:46.508835     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:29:51 old-k8s-version-211828 kubelet[926]: E0108 21:29:51.509584     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:29:56 old-k8s-version-211828 kubelet[926]: E0108 21:29:56.510253     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:01 old-k8s-version-211828 kubelet[926]: E0108 21:30:01.511089     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:06 old-k8s-version-211828 kubelet[926]: E0108 21:30:06.511826     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:11 old-k8s-version-211828 kubelet[926]: E0108 21:30:11.512567     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:16 old-k8s-version-211828 kubelet[926]: E0108 21:30:16.513377     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:21 old-k8s-version-211828 kubelet[926]: E0108 21:30:21.514071     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:26 old-k8s-version-211828 kubelet[926]: E0108 21:30:26.514822     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:31 old-k8s-version-211828 kubelet[926]: E0108 21:30:31.515598     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:34 old-k8s-version-211828 kubelet[926]: E0108 21:30:34.319861     926 pod_workers.go:191] Error syncing pod ec80e506-5c07-426a-96b5-39a19c3616de ("kindnet-9z2n8_kube-system(ec80e506-5c07-426a-96b5-39a19c3616de)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-9z2n8_kube-system(ec80e506-5c07-426a-96b5-39a19c3616de)"
	Jan 08 21:30:36 old-k8s-version-211828 kubelet[926]: E0108 21:30:36.516306     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:41 old-k8s-version-211828 kubelet[926]: E0108 21:30:41.517137     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:46 old-k8s-version-211828 kubelet[926]: E0108 21:30:46.518015     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:48 old-k8s-version-211828 kubelet[926]: E0108 21:30:48.231223     926 pod_workers.go:191] Error syncing pod ec80e506-5c07-426a-96b5-39a19c3616de ("kindnet-9z2n8_kube-system(ec80e506-5c07-426a-96b5-39a19c3616de)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-9z2n8_kube-system(ec80e506-5c07-426a-96b5-39a19c3616de)"
	Jan 08 21:30:51 old-k8s-version-211828 kubelet[926]: E0108 21:30:51.518848     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:30:56 old-k8s-version-211828 kubelet[926]: E0108 21:30:56.519588     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:31:01 old-k8s-version-211828 kubelet[926]: E0108 21:31:01.520401     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:31:02 old-k8s-version-211828 kubelet[926]: E0108 21:31:02.231423     926 pod_workers.go:191] Error syncing pod ec80e506-5c07-426a-96b5-39a19c3616de ("kindnet-9z2n8_kube-system(ec80e506-5c07-426a-96b5-39a19c3616de)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-9z2n8_kube-system(ec80e506-5c07-426a-96b5-39a19c3616de)"
	Jan 08 21:31:06 old-k8s-version-211828 kubelet[926]: E0108 21:31:06.521134     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:31:11 old-k8s-version-211828 kubelet[926]: E0108 21:31:11.521923     926 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-211828 -n old-k8s-version-211828
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-211828 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-5644d7b6d9-lm49s storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-211828 describe pod busybox coredns-5644d7b6d9-lm49s storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-211828 describe pod busybox coredns-5644d7b6d9-lm49s storage-provisioner: exit status 1 (65.970503ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jz8cr (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  default-token-jz8cr:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-jz8cr
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  8m4s                  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
	  Warning  FailedScheduling  6m56s (x1 over 8m4s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-lm49s" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-211828 describe pod busybox coredns-5644d7b6d9-lm49s storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (484.56s)
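The captured output above shows one consistent chain: kubelet repeatedly logs "Container runtime network not ready ... cni plugin not initialized", the kindnet-cni container sits in CrashLoopBackOff, so the node keeps its node.kubernetes.io/not-ready taint and the busybox test pod is never scheduled. As a hedged follow-up sketch (these commands were not part of the captured run; the context and profile names are simply taken from the logs above), the chain could be confirmed with:

	# illustrative diagnostics only -- not executed during this run
	kubectl --context old-k8s-version-211828 -n kube-system get pods -l app=kindnet -o wide
	kubectl --context old-k8s-version-211828 -n kube-system logs -l app=kindnet --tail=50
	kubectl --context old-k8s-version-211828 describe node old-k8s-version-211828
	out/minikube-linux-amd64 ssh -p old-k8s-version-211828 -- sudo crictl ps -a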

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (484.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-211859 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [bc03c738-116e-47f3-b657-198a77891c22] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
E0108 21:23:41.810612   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
E0108 21:23:46.931633   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
E0108 21:23:57.172314   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
E0108 21:24:15.962864   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:24:17.652758   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: ***** TestStartStop/group/no-preload/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-211859 -n no-preload-211859
start_stop_delete_test.go:196: TestStartStop/group/no-preload/serial/DeployApp: showing logs for failed pods as of 2023-01-08 21:31:41.343998268 +0000 UTC m=+3863.487170559
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-211859 describe po busybox -n default
start_stop_delete_test.go:196: (dbg) kubectl --context no-preload-211859 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9txc5 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-9txc5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  2m46s (x2 over 8m1s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-211859 logs busybox -n default
start_stop_delete_test.go:196: (dbg) kubectl --context no-preload-211859 logs busybox -n default:
start_stop_delete_test.go:196: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-211859
helpers_test.go:235: (dbg) docker inspect no-preload-211859:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65",
	        "Created": "2023-01-08T21:19:00.370984432Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 238788,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:19:00.742893962Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/hostname",
	        "HostsPath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/hosts",
	        "LogPath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65-json.log",
	        "Name": "/no-preload-211859",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-211859:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-211859",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676-init/diff:/var/lib/docker/overlay2/08b33854295261bb38a0074055b3dd7f2f489d35ea70ee5a594182ad1dc0fd2b/diff:/var/lib/docker/overlay2/5abe192e44c49d2a25560e4c1ae4a6ba5ec58100f8e1869d306a28b614caf414/diff:/var/lib/docker/overlay2/156a32df2dac676d7cd6558d5ad13be5054eb2fddc8d1b10137bf34bd6b1ddf7/diff:/var/lib/docker/overlay2/f3a5b8630133b5cdc48bbdb98f666efad221386dbd08f5fc096c7ba6d3f31667/diff:/var/lib/docker/overlay2/4d4ccd218e479874818175ec6c4e28f06f0084eed74b9a83af5a96e8bf41e6c7/diff:/var/lib/docker/overlay2/ab8a3247b29cb45a0024dbc2477a50b042f03e39064c09e23d21a2b5f331013c/diff:/var/lib/docker/overlay2/cd6161fa5d366910a9eb537894f374c49b0ba256c81a9c1ad3d4f4786d730668/diff:/var/lib/docker/overlay2/a26d88db409214dee405ed385f40b0d58e37ef670c26e65c1132eb27e3bb984d/diff:/var/lib/docker/overlay2/f76d833c20bf46fe7aec412f931039cdeb58b129552b97228c55fada8b05e63a/diff:/var/lib/docker/overlay2/584beb
80220462b8bc5a6c651735d55544ebef775c71b1a91b92235049a7e9da/diff:/var/lib/docker/overlay2/5209ebd508837f05291d5f22eff307db47f9c94691dd096cdc6ff6446db45461/diff:/var/lib/docker/overlay2/968fc302c0737742d4698bb0026b5cfc50c4b2934930db492c97b346fca2bd67/diff:/var/lib/docker/overlay2/b469d09ff82a75fb0f6dd04264a04b21d3d113c8577afc6dbc6a7dafe4022179/diff:/var/lib/docker/overlay2/2869116cf2f02bdc6170bde623a11022eba8e5c0f84630c9a0fd0a86483d0ed2/diff:/var/lib/docker/overlay2/ad4794ae637b0ce306963007b35562110fba050d9e3cd6196830448c749d7cc6/diff:/var/lib/docker/overlay2/a53b191897dfffb12ad2948cad16d668a0f7c2be5e3e6fd2556fe22d8d608879/diff:/var/lib/docker/overlay2/bbc7772821406640e29f442eb8752d92d7b72e8314035d6cf94255976b90642e/diff:/var/lib/docker/overlay2/5e3ac5600af83e3f6f756895fbd7a66fa97e799f659541508353078d057fd89b/diff:/var/lib/docker/overlay2/79c23e7084aa1de637cb09a7c07679d6c8d4ff6c6e7384f7f15cb549f8186fd6/diff:/var/lib/docker/overlay2/86ef1d4759376f1edb9cf43004ae59ba61200d79acea357b872de5247071531a/diff:/var/lib/d
ocker/overlay2/5c948a7514e051c2a3b4600944d475a674404c1f58b1bd7fd4d7b764be35c3e7/diff:/var/lib/docker/overlay2/71b25d81823337d776e0202f623b2f822a2ebba0e8f935e3211da145bbdac22d/diff:/var/lib/docker/overlay2/4261f26f8c837e7f8f07ba0cd2aca7274aa96561ac0bd104af28d44c15678f8d/diff:/var/lib/docker/overlay2/f2a71b21156c3ad367bb9fe8bde3e75df6681f9e505d34d8e1b635e50aa0e29b/diff:/var/lib/docker/overlay2/9ef1fde443b60bf4cda7ef2a51adcacef4383280f30eb6ee14e23959a5395e18/diff:/var/lib/docker/overlay2/aa727d5a432452b3e1ed2fc994db0bf2d5a47af89bc768a1e2b633baa8319025/diff:/var/lib/docker/overlay2/dc9499b738334b5626de1363086d853c4e6c2c5b2d51f58e8e1ed8346cc39f45/diff:/var/lib/docker/overlay2/03cc962675627b454b9857b5ffcac439143653d35cd42ad886571fbecd955c6c/diff:/var/lib/docker/overlay2/32064543ecb8bae609fda6b220233d7b73f4e601bd320e1d389451fb120c2459/diff:/var/lib/docker/overlay2/a0c00493cc3fdbdaa87f61faf83c8fcedb9ee4d5b24ecc35eceba0429b6dcc88/diff:/var/lib/docker/overlay2/18cbe77f8dab8144889e1deb46a8795fd207afbdfd9ea3fdeb526b72d7e
4a4cb/diff:/var/lib/docker/overlay2/dca4f5d0bca28bc75fa764afd39d07871f051c2d362fc2850ca957b975f75555/diff:/var/lib/docker/overlay2/cd071d8f661b9ff9c6d861fce06842188fda3d307062488b93944a97883f4a67/diff:/var/lib/docker/overlay2/5a0c166a1e11d0bb9f7de4a882be960f70ac972e91b3ab8fd83fc6eec2bf1d5e/diff:/var/lib/docker/overlay2/43eb3ae44d3bf3e6cbf5a66accacb1247ef210213c56a67097949d06794f4bbc/diff:/var/lib/docker/overlay2/b60cb5b525f4aed6b27cf2e451596e134d1185254f3798d5a91357814552e273/diff:/var/lib/docker/overlay2/e5c8c85e78fedb1981436f368ad70bdfe38d240dddceac3d103e1d09bc252cef/diff:/var/lib/docker/overlay2/7bf66ab7d17eda3c6760fe43588afa33bb56279f5a7202de9cc98b4711f0da5b/diff:/var/lib/docker/overlay2/d5dbd6e8a20b8681e7b55b57c382c8c9029da8496cc9e0f4d1eb3b224583535e/diff:/var/lib/docker/overlay2/de3ca9b514aa5071c0fd39d0d80fa080bce6e82e872747ec916ef7bf379c0c2a/diff:/var/lib/docker/overlay2/80bff03df7f2a6639d98a2c99d1cc55a36dc9110501452bfd7bce3bce496541e/diff:/var/lib/docker/overlay2/471bea5a4732f210c3fcf87a02643b615a96f6
39d01ac2c3e71f7886b23733a5/diff:/var/lib/docker/overlay2/29dd84326e3ef52c85008a60a7ff2225fd18f1d51ec3e60f143a0d93e442469d/diff:/var/lib/docker/overlay2/05d035eba3e40a396836d9917064869afd7ec70920f526f8903dc8f5a9f2dce3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-211859",
	                "Source": "/var/lib/docker/volumes/no-preload-211859/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-211859",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-211859",
	                "name.minikube.sigs.k8s.io": "no-preload-211859",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6412d705758b0fa3708816e7c5f6b0b6bfa26c10bbbc6e3acea6f602d9c2dab3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33017"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33016"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33013"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33015"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33014"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6412d705758b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-211859": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "23cabd631389",
	                        "no-preload-211859"
	                    ],
	                    "NetworkID": "f6ac14d41355072c0829af36f4aed661fe422e2af93237ea348f6b100ade02e6",
	                    "EndpointID": "2f14131c7e47074512e155979b67d1e3a5303bb55db398f44880c21804eebda9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
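Note: the inspect dump above is mostly static container configuration; for the two things the post-mortem actually keys on (container state and the mapped API server port), a narrower query of the same kind used elsewhere in this log would be (illustrative only):

	docker inspect -f '{{.State.Status}} started={{.State.StartedAt}}' no-preload-211859
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-211859

Both are standard docker inspect --format Go templates; the second uses the same index expression minikube itself logs when it resolves the container's SSH port.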
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-211859 -n no-preload-211859
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-211859 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                         | disable-driver-mounts-211952 | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:19 UTC |
	|         | disable-driver-mounts-211952                               |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC |                     |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-211950                | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:20 UTC | 08 Jan 23 21:20 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:20 UTC | 08 Jan 23 21:21 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-211950                     | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:21 UTC | 08 Jan 23 21:21 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:21 UTC | 08 Jan 23 21:26 UTC |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                              |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-211950 sudo                                 | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-212639                 | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-212639                      | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-212639 sudo                                  | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| addons  | enable metrics-server -p old-k8s-version-211828            | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-211828                                  | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-211828                 | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-211828                                  | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --kvm-network=default                                      |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                              |         |         |                     |                     |
	|         | --keep-context=false                                       |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                              |         |         |                     |                     |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 21:31:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:31:14.786818  274657 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:31:14.787251  274657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:31:14.787265  274657 out.go:309] Setting ErrFile to fd 2...
	I0108 21:31:14.787272  274657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:31:14.787427  274657 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:31:14.788057  274657 out.go:303] Setting JSON to false
	I0108 21:31:14.789452  274657 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4424,"bootTime":1673209051,"procs":560,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:31:14.789560  274657 start.go:135] virtualization: kvm guest
	I0108 21:31:14.792273  274657 out.go:177] * [old-k8s-version-211828] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:31:14.793736  274657 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:31:14.793706  274657 notify.go:220] Checking for updates...
	I0108 21:31:14.796380  274657 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:31:14.797863  274657 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:31:14.799587  274657 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:31:14.801298  274657 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:31:14.803317  274657 config.go:180] Loaded profile config "old-k8s-version-211828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0108 21:31:14.805219  274657 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I0108 21:31:14.806495  274657 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:31:14.836588  274657 docker.go:137] docker version: linux-20.10.22
	I0108 21:31:14.836697  274657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:31:14.935102  274657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:31:14.857932215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:31:14.935250  274657 docker.go:254] overlay module found
	I0108 21:31:14.937603  274657 out.go:177] * Using the docker driver based on existing profile
	I0108 21:31:14.939308  274657 start.go:294] selected driver: docker
	I0108 21:31:14.939320  274657 start.go:838] validating driver "docker" against &{Name:old-k8s-version-211828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-211828 Namespace:default APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:31:14.939425  274657 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:31:14.940295  274657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:31:15.037391  274657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:31:14.960690951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:31:15.037661  274657 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:31:15.037690  274657 cni.go:95] Creating CNI manager for ""
	I0108 21:31:15.037701  274657 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:31:15.037727  274657 start_flags.go:317] config:
	{Name:old-k8s-version-211828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-211828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:31:15.040192  274657 out.go:177] * Starting control plane node old-k8s-version-211828 in cluster old-k8s-version-211828
	I0108 21:31:15.041641  274657 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:31:15.043001  274657 out.go:177] * Pulling base image ...
	I0108 21:31:15.044447  274657 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0108 21:31:15.044499  274657 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0108 21:31:15.044507  274657 cache.go:57] Caching tarball of preloaded images
	I0108 21:31:15.044542  274657 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:31:15.044751  274657 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:31:15.044768  274657 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0108 21:31:15.044879  274657 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/config.json ...
	I0108 21:31:15.070621  274657 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:31:15.070646  274657 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:31:15.070659  274657 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:31:15.070696  274657 start.go:364] acquiring machines lock for old-k8s-version-211828: {Name:mk7415b788fbdcf6791633774a550ddef2131776 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:15.070786  274657 start.go:368] acquired machines lock for "old-k8s-version-211828" in 67.237µs
	I0108 21:31:15.070803  274657 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:31:15.070808  274657 fix.go:55] fixHost starting: 
	I0108 21:31:15.071007  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:31:15.094712  274657 fix.go:103] recreateIfNeeded on old-k8s-version-211828: state=Stopped err=<nil>
	W0108 21:31:15.094743  274657 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:31:15.097062  274657 out.go:177] * Restarting existing docker container for "old-k8s-version-211828" ...
	I0108 21:31:15.098676  274657 cli_runner.go:164] Run: docker start old-k8s-version-211828
	I0108 21:31:15.451736  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:31:15.477931  274657 kic.go:415] container "old-k8s-version-211828" state is running.
	I0108 21:31:15.478259  274657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-211828
	I0108 21:31:15.502791  274657 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/config.json ...
	I0108 21:31:15.503068  274657 machine.go:88] provisioning docker machine ...
	I0108 21:31:15.503092  274657 ubuntu.go:169] provisioning hostname "old-k8s-version-211828"
	I0108 21:31:15.503141  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:15.527135  274657 main.go:134] libmachine: Using SSH client type: native
	I0108 21:31:15.527388  274657 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33047 <nil> <nil>}
	I0108 21:31:15.527414  274657 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-211828 && echo "old-k8s-version-211828" | sudo tee /etc/hostname
	I0108 21:31:15.528154  274657 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49092->127.0.0.1:33047: read: connection reset by peer
	I0108 21:31:18.652158  274657 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-211828
	
	I0108 21:31:18.652235  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:18.677352  274657 main.go:134] libmachine: Using SSH client type: native
	I0108 21:31:18.677632  274657 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33047 <nil> <nil>}
	I0108 21:31:18.677662  274657 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-211828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-211828/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-211828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:31:18.791306  274657 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:31:18.791338  274657 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:31:18.791356  274657 ubuntu.go:177] setting up certificates
	I0108 21:31:18.791364  274657 provision.go:83] configureAuth start
	I0108 21:31:18.791407  274657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-211828
	I0108 21:31:18.815953  274657 provision.go:138] copyHostCerts
	I0108 21:31:18.816006  274657 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:31:18.816012  274657 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:31:18.816081  274657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:31:18.816177  274657 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:31:18.816185  274657 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:31:18.816212  274657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:31:18.816273  274657 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:31:18.816281  274657 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:31:18.816304  274657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:31:18.816348  274657 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-211828 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-211828]
	I0108 21:31:18.931118  274657 provision.go:172] copyRemoteCerts
	I0108 21:31:18.931183  274657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:31:18.931217  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:18.955719  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:31:19.042817  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:31:19.060612  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 21:31:19.077223  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:31:19.093605  274657 provision.go:86] duration metric: configureAuth took 302.219123ms
	I0108 21:31:19.093631  274657 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:31:19.093784  274657 config.go:180] Loaded profile config "old-k8s-version-211828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0108 21:31:19.093794  274657 machine.go:91] provisioned docker machine in 3.590715689s
	I0108 21:31:19.093801  274657 start.go:300] post-start starting for "old-k8s-version-211828" (driver="docker")
	I0108 21:31:19.093807  274657 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:31:19.093848  274657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:31:19.093884  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:19.118184  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:31:19.206786  274657 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:31:19.209517  274657 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:31:19.209547  274657 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:31:19.209558  274657 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:31:19.209564  274657 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:31:19.209576  274657 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:31:19.209629  274657 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:31:19.209704  274657 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:31:19.209800  274657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:31:19.216505  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:31:19.232916  274657 start.go:303] post-start completed in 139.102319ms
	I0108 21:31:19.232985  274657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:31:19.233025  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:19.257132  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:31:19.339957  274657 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:31:19.343759  274657 fix.go:57] fixHost completed within 4.272947567s
	I0108 21:31:19.343776  274657 start.go:83] releasing machines lock for "old-k8s-version-211828", held for 4.272979327s
	I0108 21:31:19.343848  274657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-211828
	I0108 21:31:19.367793  274657 ssh_runner.go:195] Run: cat /version.json
	I0108 21:31:19.367832  274657 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0108 21:31:19.367913  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:19.367840  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:19.395829  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:31:19.396770  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:31:19.497144  274657 ssh_runner.go:195] Run: systemctl --version
	I0108 21:31:19.501133  274657 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:31:19.512197  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:31:19.521435  274657 docker.go:189] disabling docker service ...
	I0108 21:31:19.521487  274657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:31:19.530733  274657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:31:19.539679  274657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:31:19.619642  274657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:31:19.693532  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:31:19.702588  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:31:19.714970  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.1"|' -i /etc/containerd/config.toml"
	I0108 21:31:19.723127  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:31:19.730986  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:31:19.738308  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 21:31:19.746088  274657 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:31:19.752009  274657 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:31:19.757928  274657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:31:19.836380  274657 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:31:19.899437  274657 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:31:19.899536  274657 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:31:19.903121  274657 start.go:472] Will wait 60s for crictl version
	I0108 21:31:19.903177  274657 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:31:19.931573  274657 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:31:19Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 21:31:30.978568  274657 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:31:31.001293  274657 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:31:31.001343  274657 ssh_runner.go:195] Run: containerd --version
	I0108 21:31:31.023736  274657 ssh_runner.go:195] Run: containerd --version
	I0108 21:31:31.049215  274657 out.go:177] * Preparing Kubernetes v1.16.0 on containerd 1.6.10 ...
	I0108 21:31:31.050855  274657 cli_runner.go:164] Run: docker network inspect old-k8s-version-211828 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:31:31.072896  274657 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0108 21:31:31.076073  274657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:31:31.087169  274657 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0108 21:31:31.088521  274657 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0108 21:31:31.088579  274657 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:31:31.110490  274657 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:31:31.110508  274657 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:31:31.110556  274657 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:31:31.133748  274657 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:31:31.133766  274657 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:31:31.133809  274657 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:31:31.156636  274657 cni.go:95] Creating CNI manager for ""
	I0108 21:31:31.156662  274657 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:31:31.156675  274657 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:31:31.156688  274657 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-211828 NodeName:old-k8s-version-211828 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:31:31.156817  274657 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-211828"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-211828
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:31:31.156894  274657 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-211828 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-211828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:31:31.156938  274657 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0108 21:31:31.164010  274657 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:31:31.164059  274657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:31:31.170368  274657 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (567 bytes)
	I0108 21:31:31.182752  274657 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:31:31.195402  274657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2128 bytes)
	I0108 21:31:31.207914  274657 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:31:31.210710  274657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:31:31.219370  274657 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828 for IP: 192.168.76.2
	I0108 21:31:31.219455  274657 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:31:31.219534  274657 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:31:31.219611  274657 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/client.key
	I0108 21:31:31.219669  274657 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.key.31bdca25
	I0108 21:31:31.219701  274657 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/proxy-client.key
	I0108 21:31:31.219785  274657 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:31:31.219813  274657 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:31:31.219822  274657 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:31:31.219849  274657 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:31:31.219874  274657 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:31:31.219895  274657 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:31:31.219944  274657 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:31:31.220509  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:31:31.237015  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 21:31:31.253867  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:31:31.270214  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:31:31.286736  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:31:31.303748  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:31:31.321340  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:31:31.338473  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:31:31.355647  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:31:31.372647  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:31:31.389808  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:31:31.406899  274657 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:31:31.419384  274657 ssh_runner.go:195] Run: openssl version
	I0108 21:31:31.424189  274657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:31:31.431623  274657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:31:31.434625  274657 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:31:31.434666  274657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:31:31.439324  274657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:31:31.446001  274657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:31:31.453698  274657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:31:31.456687  274657 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:31:31.456735  274657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:31:31.461571  274657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:31:31.468289  274657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:31:31.475322  274657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:31:31.478233  274657 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:31:31.478271  274657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:31:31.483024  274657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:31:31.489456  274657 kubeadm.go:396] StartCluster: {Name:old-k8s-version-211828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-211828 Namespace:default APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:31:31.489561  274657 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:31:31.489594  274657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:31:31.514364  274657 cri.go:87] found id: "ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a"
	I0108 21:31:31.514386  274657 cri.go:87] found id: "4fdaee2b10f2982887b9869fcd0c1dc5ebe85380068a2c7ca6e308bbd418bf25"
	I0108 21:31:31.514401  274657 cri.go:87] found id: "a9e20d8377a666867d2e18a3ea12818eaa42542c47d99b0ba20c5f7b3c9a8f70"
	I0108 21:31:31.514407  274657 cri.go:87] found id: "3baeebbc6da6011661ac440d440193720d7cb3ffc1d6f51175b239cc7994d8d4"
	I0108 21:31:31.514412  274657 cri.go:87] found id: "dc587e05c9875fe35b86a28d7d5b8fc7bedc7907ec9abcf12c1883d15804ed4d"
	I0108 21:31:31.514419  274657 cri.go:87] found id: "18030e6256a0f097fd3fd026a18690f5b7e901b5dacd851696eb59d51effb330"
	I0108 21:31:31.514424  274657 cri.go:87] found id: ""
	I0108 21:31:31.514460  274657 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:31:31.525493  274657 cri.go:114] JSON = null
	W0108 21:31:31.525551  274657 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
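The unpause warning above comes from comparing two views of the same pods: `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` returned six container IDs, while `runc --root /run/containerd/runc/k8s.io list -f json` returned null (no paused containers). A rough Go sketch of collecting the crictl side of that comparison (command and label copied from the log; the function name is illustrative, not minikube's cri.go API):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// kubeSystemContainerIDs shells out to crictl the same way the log above does
	// and returns the container IDs found in the kube-system namespace.
	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps: %w", err)
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
	}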
	I0108 21:31:31.525611  274657 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:31:31.532465  274657 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:31:31.532485  274657 kubeadm.go:627] restartCluster start
	I0108 21:31:31.532526  274657 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:31:31.538695  274657 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:31.539540  274657 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-211828" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:31:31.539974  274657 kubeconfig.go:146] "old-k8s-version-211828" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:31:31.540778  274657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:31:31.542454  274657 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:31:31.548835  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:31.548878  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:31.556574  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:31.756964  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:31.757026  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:31.765711  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:31.956987  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:31.957087  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:31.965822  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:32.157114  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:32.157204  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:32.165572  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:32.356849  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:32.356932  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:32.365936  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:32.557219  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:32.557301  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:32.565818  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:32.757103  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:32.757202  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:32.765601  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:32.956833  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:32.956909  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:32.965592  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:33.156802  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:33.156864  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:33.165214  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:33.357531  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:33.357620  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:33.366024  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:33.557341  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:33.557432  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:33.566047  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:33.757323  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:33.757407  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:33.766123  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:33.957421  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:33.957482  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:33.965897  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:34.157184  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:34.157255  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:34.165750  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:34.357066  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:34.357148  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:34.365686  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:34.556893  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:34.556978  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:34.566772  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:34.566791  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:34.566823  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:34.574472  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:34.574499  274657 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0108 21:31:34.574515  274657 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:31:34.574528  274657 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:31:34.574567  274657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:31:34.600377  274657 cri.go:87] found id: "ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a"
	I0108 21:31:34.600401  274657 cri.go:87] found id: "4fdaee2b10f2982887b9869fcd0c1dc5ebe85380068a2c7ca6e308bbd418bf25"
	I0108 21:31:34.600411  274657 cri.go:87] found id: "a9e20d8377a666867d2e18a3ea12818eaa42542c47d99b0ba20c5f7b3c9a8f70"
	I0108 21:31:34.600422  274657 cri.go:87] found id: "3baeebbc6da6011661ac440d440193720d7cb3ffc1d6f51175b239cc7994d8d4"
	I0108 21:31:34.600432  274657 cri.go:87] found id: "dc587e05c9875fe35b86a28d7d5b8fc7bedc7907ec9abcf12c1883d15804ed4d"
	I0108 21:31:34.600445  274657 cri.go:87] found id: "18030e6256a0f097fd3fd026a18690f5b7e901b5dacd851696eb59d51effb330"
	I0108 21:31:34.600455  274657 cri.go:87] found id: ""
	I0108 21:31:34.600466  274657 cri.go:232] Stopping containers: [ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a 4fdaee2b10f2982887b9869fcd0c1dc5ebe85380068a2c7ca6e308bbd418bf25 a9e20d8377a666867d2e18a3ea12818eaa42542c47d99b0ba20c5f7b3c9a8f70 3baeebbc6da6011661ac440d440193720d7cb3ffc1d6f51175b239cc7994d8d4 dc587e05c9875fe35b86a28d7d5b8fc7bedc7907ec9abcf12c1883d15804ed4d 18030e6256a0f097fd3fd026a18690f5b7e901b5dacd851696eb59d51effb330]
	I0108 21:31:34.600511  274657 ssh_runner.go:195] Run: which crictl
	I0108 21:31:34.603393  274657 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a 4fdaee2b10f2982887b9869fcd0c1dc5ebe85380068a2c7ca6e308bbd418bf25 a9e20d8377a666867d2e18a3ea12818eaa42542c47d99b0ba20c5f7b3c9a8f70 3baeebbc6da6011661ac440d440193720d7cb3ffc1d6f51175b239cc7994d8d4 dc587e05c9875fe35b86a28d7d5b8fc7bedc7907ec9abcf12c1883d15804ed4d 18030e6256a0f097fd3fd026a18690f5b7e901b5dacd851696eb59d51effb330
	I0108 21:31:34.628109  274657 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:31:34.637971  274657 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:31:34.645121  274657 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Jan  8 21:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Jan  8 21:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Jan  8 21:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Jan  8 21:18 /etc/kubernetes/scheduler.conf
	
	I0108 21:31:34.645173  274657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 21:31:34.651869  274657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 21:31:34.658451  274657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 21:31:34.665036  274657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 21:31:34.671382  274657 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:31:34.677810  274657 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:31:34.677835  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:31:34.729578  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:31:35.487872  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:31:35.629926  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:31:35.689674  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:31:35.830162  274657 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:31:35.830228  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:31:36.340067  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:31:36.839992  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:31:36.851033  274657 api_server.go:71] duration metric: took 1.020878979s to wait for apiserver process to appear ...
	I0108 21:31:36.851064  274657 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:31:36.851078  274657 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0108 21:31:36.851443  274657 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0108 21:31:37.352200  274657 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
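The api_server.go lines above poll https://192.168.76.2:8443/healthz until the restarted apiserver answers; the first attempt fails with "connection refused" because the control plane is still coming up. A minimal sketch of one such probe, assuming a cluster-internal (self-signed) apiserver certificate, hence InsecureSkipVerify, and an illustrative retry count:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz performs one healthz check, the way each "Checking apiserver
	// healthz" line above does; callers retry it until it returns "ok".
	func probeHealthz(url string) (string, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// the apiserver cert is cluster-internal, so skip verification here
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return "", err // e.g. "connection refused" while the apiserver restarts
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return string(body), nil
	}

	func main() {
		for i := 0; i < 10; i++ {
			if body, err := probeHealthz("https://192.168.76.2:8443/healthz"); err == nil {
				fmt.Println("healthz:", body)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver did not become healthy")
	}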
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	da42aae8803ba       d6e3e26021b60       3 minutes ago       Exited              kindnet-cni               3                   de969308cd0da
	640b6f75f7dac       beaaf00edd38a       12 minutes ago      Running             kube-proxy                0                   40574ad3062a4
	7b61203838e94       6d23ec0e8b87e       12 minutes ago      Running             kube-scheduler            0                   62c93aab5d432
	e5292d3c9357a       0346dbd74bcb9       12 minutes ago      Running             kube-apiserver            0                   569500f001a7b
	4777a2f6ea154       a8a176a5d5d69       12 minutes ago      Running             etcd                      0                   40770a6daff3e
	1c6e8899fc497       6039992312758       12 minutes ago      Running             kube-controller-manager   0                   594a31cb2e0e2
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sun 2023-01-08 21:19:01 UTC, end at Sun 2023-01-08 21:31:42 UTC. --
	Jan 08 21:25:03 no-preload-211859 containerd[513]: time="2023-01-08T21:25:03.438120197Z" level=warning msg="cleaning up after shim disconnected" id=3b86738431af400844575d5347e086e6633f433667bd95cc38980c627cb9bf93 namespace=k8s.io
	Jan 08 21:25:03 no-preload-211859 containerd[513]: time="2023-01-08T21:25:03.438136802Z" level=info msg="cleaning up dead shim"
	Jan 08 21:25:03 no-preload-211859 containerd[513]: time="2023-01-08T21:25:03.447797439Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:25:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2891 runtime=io.containerd.runc.v2\n"
	Jan 08 21:25:04 no-preload-211859 containerd[513]: time="2023-01-08T21:25:04.160320794Z" level=info msg="RemoveContainer for \"01444440cdfa75043bc853a535682e21b80b51a68a4c83045397f1599c379c38\""
	Jan 08 21:25:04 no-preload-211859 containerd[513]: time="2023-01-08T21:25:04.166638123Z" level=info msg="RemoveContainer for \"01444440cdfa75043bc853a535682e21b80b51a68a4c83045397f1599c379c38\" returns successfully"
	Jan 08 21:25:18 no-preload-211859 containerd[513]: time="2023-01-08T21:25:18.520239400Z" level=info msg="CreateContainer within sandbox \"de969308cd0da9bfec2cf38136673604413fd525fb7e1e2091093cb72e00e62d\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jan 08 21:25:18 no-preload-211859 containerd[513]: time="2023-01-08T21:25:18.534094732Z" level=info msg="CreateContainer within sandbox \"de969308cd0da9bfec2cf38136673604413fd525fb7e1e2091093cb72e00e62d\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"ee5c26b5be6693eba12509c2b026b6926fe69201676dd458a85e373d53fbbd42\""
	Jan 08 21:25:18 no-preload-211859 containerd[513]: time="2023-01-08T21:25:18.534653853Z" level=info msg="StartContainer for \"ee5c26b5be6693eba12509c2b026b6926fe69201676dd458a85e373d53fbbd42\""
	Jan 08 21:25:18 no-preload-211859 containerd[513]: time="2023-01-08T21:25:18.624706158Z" level=info msg="StartContainer for \"ee5c26b5be6693eba12509c2b026b6926fe69201676dd458a85e373d53fbbd42\" returns successfully"
	Jan 08 21:27:59 no-preload-211859 containerd[513]: time="2023-01-08T21:27:59.061990890Z" level=info msg="shim disconnected" id=ee5c26b5be6693eba12509c2b026b6926fe69201676dd458a85e373d53fbbd42
	Jan 08 21:27:59 no-preload-211859 containerd[513]: time="2023-01-08T21:27:59.062061003Z" level=warning msg="cleaning up after shim disconnected" id=ee5c26b5be6693eba12509c2b026b6926fe69201676dd458a85e373d53fbbd42 namespace=k8s.io
	Jan 08 21:27:59 no-preload-211859 containerd[513]: time="2023-01-08T21:27:59.062078804Z" level=info msg="cleaning up dead shim"
	Jan 08 21:27:59 no-preload-211859 containerd[513]: time="2023-01-08T21:27:59.070789653Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:27:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3004 runtime=io.containerd.runc.v2\n"
	Jan 08 21:27:59 no-preload-211859 containerd[513]: time="2023-01-08T21:27:59.473824657Z" level=info msg="RemoveContainer for \"3b86738431af400844575d5347e086e6633f433667bd95cc38980c627cb9bf93\""
	Jan 08 21:27:59 no-preload-211859 containerd[513]: time="2023-01-08T21:27:59.480130313Z" level=info msg="RemoveContainer for \"3b86738431af400844575d5347e086e6633f433667bd95cc38980c627cb9bf93\" returns successfully"
	Jan 08 21:28:23 no-preload-211859 containerd[513]: time="2023-01-08T21:28:23.519127193Z" level=info msg="CreateContainer within sandbox \"de969308cd0da9bfec2cf38136673604413fd525fb7e1e2091093cb72e00e62d\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jan 08 21:28:23 no-preload-211859 containerd[513]: time="2023-01-08T21:28:23.531776811Z" level=info msg="CreateContainer within sandbox \"de969308cd0da9bfec2cf38136673604413fd525fb7e1e2091093cb72e00e62d\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c\""
	Jan 08 21:28:23 no-preload-211859 containerd[513]: time="2023-01-08T21:28:23.532257406Z" level=info msg="StartContainer for \"da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c\""
	Jan 08 21:28:23 no-preload-211859 containerd[513]: time="2023-01-08T21:28:23.625992265Z" level=info msg="StartContainer for \"da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c\" returns successfully"
	Jan 08 21:31:04 no-preload-211859 containerd[513]: time="2023-01-08T21:31:04.051921889Z" level=info msg="shim disconnected" id=da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c
	Jan 08 21:31:04 no-preload-211859 containerd[513]: time="2023-01-08T21:31:04.051988781Z" level=warning msg="cleaning up after shim disconnected" id=da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c namespace=k8s.io
	Jan 08 21:31:04 no-preload-211859 containerd[513]: time="2023-01-08T21:31:04.052005374Z" level=info msg="cleaning up dead shim"
	Jan 08 21:31:04 no-preload-211859 containerd[513]: time="2023-01-08T21:31:04.061668438Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:31:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3121 runtime=io.containerd.runc.v2\n"
	Jan 08 21:31:04 no-preload-211859 containerd[513]: time="2023-01-08T21:31:04.798048658Z" level=info msg="RemoveContainer for \"ee5c26b5be6693eba12509c2b026b6926fe69201676dd458a85e373d53fbbd42\""
	Jan 08 21:31:04 no-preload-211859 containerd[513]: time="2023-01-08T21:31:04.803161353Z" level=info msg="RemoveContainer for \"ee5c26b5be6693eba12509c2b026b6926fe69201676dd458a85e373d53fbbd42\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-211859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-211859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=no-preload-211859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T21_19_25_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 21:19:21 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-211859
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 08 Jan 2023 21:31:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 21:30:07 +0000   Sun, 08 Jan 2023 21:19:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 21:30:07 +0000   Sun, 08 Jan 2023 21:19:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 21:30:07 +0000   Sun, 08 Jan 2023 21:19:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 08 Jan 2023 21:30:07 +0000   Sun, 08 Jan 2023 21:19:19 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-211859
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                1811e86e-6254-4928-9c37-fe78bdd2d83e
	  Boot ID:                    abb1671c-ddf5-4694-bdc8-1024e5cc0b18
	  Kernel Version:             5.15.0-1025-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.10
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-211859                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-vh4hl                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-no-preload-211859             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-no-preload-211859    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-zb6wz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-no-preload-211859             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x4 over 12m)  kubelet          Node no-preload-211859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x4 over 12m)  kubelet          Node no-preload-211859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x4 over 12m)  kubelet          Node no-preload-211859 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node no-preload-211859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node no-preload-211859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node no-preload-211859 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node no-preload-211859 event: Registered Node no-preload-211859 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +2.971851] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027844] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027909] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[Jan 8 21:19] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.006215] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023951] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.967852] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.035798] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023925] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.940341] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.027361] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.019905] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	
	* 
	* ==> etcd [4777a2f6ea154d2e676477c6810e4eebb38bfca013c0990a8605fa7676818ecf] <==
	* {"level":"info","ts":"2023-01-08T21:19:38.340Z","caller":"traceutil/trace.go:171","msg":"trace[19672861] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"106.056222ms","start":"2023-01-08T21:19:38.234Z","end":"2023-01-08T21:19:38.340Z","steps":["trace[19672861] 'process raft request'  (duration: 105.814606ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:19:38.340Z","caller":"traceutil/trace.go:171","msg":"trace[1129527188] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"104.142429ms","start":"2023-01-08T21:19:38.236Z","end":"2023-01-08T21:19:38.340Z","steps":["trace[1129527188] 'process raft request'  (duration: 103.925251ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-08T21:19:38.341Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"108.501086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-vh4hl\" ","response":"range_response_count:1 size:3686"}
	{"level":"info","ts":"2023-01-08T21:19:38.341Z","caller":"traceutil/trace.go:171","msg":"trace[246398554] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-vh4hl; range_end:; response_count:1; response_revision:337; }","duration":"108.584023ms","start":"2023-01-08T21:19:38.232Z","end":"2023-01-08T21:19:38.341Z","steps":["trace[246398554] 'agreement among raft nodes before linearized reading'  (duration: 108.494697ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:19:38.476Z","caller":"traceutil/trace.go:171","msg":"trace[882092600] linearizableReadLoop","detail":"{readStateIndex:348; appliedIndex:348; }","duration":"129.175544ms","start":"2023-01-08T21:19:38.346Z","end":"2023-01-08T21:19:38.475Z","steps":["trace[882092600] 'read index received'  (duration: 129.166063ms)","trace[882092600] 'applied index is now lower than readState.Index'  (duration: 8.426µs)"],"step_count":2}
	{"level":"warn","ts":"2023-01-08T21:19:38.558Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"212.073643ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2023-01-08T21:19:38.558Z","caller":"traceutil/trace.go:171","msg":"trace[1428972422] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:338; }","duration":"212.170328ms","start":"2023-01-08T21:19:38.346Z","end":"2023-01-08T21:19:38.558Z","steps":["trace[1428972422] 'agreement among raft nodes before linearized reading'  (duration: 129.289591ms)","trace[1428972422] 'range keys from in-memory index tree'  (duration: 82.642207ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-08T21:19:38.558Z","caller":"traceutil/trace.go:171","msg":"trace[1363471911] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"211.643128ms","start":"2023-01-08T21:19:38.347Z","end":"2023-01-08T21:19:38.558Z","steps":["trace[1363471911] 'process raft request'  (duration: 128.755978ms)","trace[1363471911] 'compare'  (duration: 82.707271ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-08T21:19:38.559Z","caller":"traceutil/trace.go:171","msg":"trace[468553146] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"210.55068ms","start":"2023-01-08T21:19:38.348Z","end":"2023-01-08T21:19:38.559Z","steps":["trace[468553146] 'process raft request'  (duration: 210.431063ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:19:38.559Z","caller":"traceutil/trace.go:171","msg":"trace[1822188889] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"206.78465ms","start":"2023-01-08T21:19:38.352Z","end":"2023-01-08T21:19:38.559Z","steps":["trace[1822188889] 'process raft request'  (duration: 206.647266ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-08T21:19:38.560Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"151.251419ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2023-01-08T21:19:38.560Z","caller":"traceutil/trace.go:171","msg":"trace[1349132424] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:341; }","duration":"151.305603ms","start":"2023-01-08T21:19:38.409Z","end":"2023-01-08T21:19:38.560Z","steps":["trace[1349132424] 'agreement among raft nodes before linearized reading'  (duration: 151.227745ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-08T21:19:38.803Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"215.509358ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-565d847f94\" ","response":"range_response_count:1 size:3685"}
	{"level":"info","ts":"2023-01-08T21:19:38.803Z","caller":"traceutil/trace.go:171","msg":"trace[390813499] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-565d847f94; range_end:; response_count:1; response_revision:349; }","duration":"215.601636ms","start":"2023-01-08T21:19:38.587Z","end":"2023-01-08T21:19:38.803Z","steps":["trace[390813499] 'agreement among raft nodes before linearized reading'  (duration: 122.1355ms)","trace[390813499] 'range keys from in-memory index tree'  (duration: 93.336213ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-08T21:19:38.803Z","caller":"traceutil/trace.go:171","msg":"trace[1873919722] transaction","detail":"{read_only:false; response_revision:351; number_of_response:1; }","duration":"179.476225ms","start":"2023-01-08T21:19:38.623Z","end":"2023-01-08T21:19:38.803Z","steps":["trace[1873919722] 'process raft request'  (duration: 179.422915ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:19:38.803Z","caller":"traceutil/trace.go:171","msg":"trace[475140130] transaction","detail":"{read_only:false; response_revision:350; number_of_response:1; }","duration":"180.612361ms","start":"2023-01-08T21:19:38.622Z","end":"2023-01-08T21:19:38.803Z","steps":["trace[475140130] 'process raft request'  (duration: 87.149705ms)","trace[475140130] 'compare'  (duration: 93.28857ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-08T21:19:38.803Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"203.972372ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-01-08T21:19:38.803Z","caller":"traceutil/trace.go:171","msg":"trace[817869447] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:349; }","duration":"204.287434ms","start":"2023-01-08T21:19:38.599Z","end":"2023-01-08T21:19:38.803Z","steps":["trace[817869447] 'agreement among raft nodes before linearized reading'  (duration: 110.561756ms)","trace[817869447] 'range keys from in-memory index tree'  (duration: 93.388837ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-08T21:19:38.803Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"217.754398ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-211859\" ","response":"range_response_count:1 size:3712"}
	{"level":"info","ts":"2023-01-08T21:19:38.803Z","caller":"traceutil/trace.go:171","msg":"trace[2130402905] range","detail":"{range_begin:/registry/minions/no-preload-211859; range_end:; response_count:1; response_revision:349; }","duration":"218.239404ms","start":"2023-01-08T21:19:38.585Z","end":"2023-01-08T21:19:38.803Z","steps":["trace[2130402905] 'agreement among raft nodes before linearized reading'  (duration: 124.368132ms)","trace[2130402905] 'range keys from in-memory index tree'  (duration: 93.348555ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-08T21:19:38.809Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"189.506275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2023-01-08T21:19:38.809Z","caller":"traceutil/trace.go:171","msg":"trace[456320770] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:351; }","duration":"189.575792ms","start":"2023-01-08T21:19:38.619Z","end":"2023-01-08T21:19:38.809Z","steps":["trace[456320770] 'agreement among raft nodes before linearized reading'  (duration: 189.459866ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:19:39.011Z","caller":"traceutil/trace.go:171","msg":"trace[457235364] transaction","detail":"{read_only:false; response_revision:358; number_of_response:1; }","duration":"163.339053ms","start":"2023-01-08T21:19:38.847Z","end":"2023-01-08T21:19:39.011Z","steps":["trace[457235364] 'process raft request'  (duration: 74.231725ms)","trace[457235364] 'compare'  (duration: 88.976454ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-08T21:29:19.817Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":448}
	{"level":"info","ts":"2023-01-08T21:29:19.818Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":448,"took":"440.01µs"}
	
	* 
	* ==> kernel <==
	*  21:31:42 up  1:14,  0 users,  load average: 0.61, 0.67, 1.20
	Linux no-preload-211859 5.15.0-1025-gcp #32~20.04.2-Ubuntu SMP Tue Nov 29 08:31:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [e5292d3c9357ae424b2211a5576a5c0d1dc2148f92dbb693b2b173d02a43a659] <==
	* I0108 21:19:21.742510       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0108 21:19:21.809931       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0108 21:19:21.821241       1 controller.go:616] quota admission added evaluator for: namespaces
	I0108 21:19:21.831952       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0108 21:19:21.832425       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0108 21:19:21.832531       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:19:21.833141       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:19:21.839440       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:19:22.504449       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 21:19:22.736509       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0108 21:19:22.739503       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0108 21:19:22.739524       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 21:19:23.047398       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:19:23.077104       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 21:19:23.130858       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0108 21:19:23.135237       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0108 21:19:23.136245       1 controller.go:616] quota admission added evaluator for: endpoints
	I0108 21:19:23.139902       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 21:19:23.748461       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0108 21:19:24.353261       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0108 21:19:24.359899       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0108 21:19:24.366666       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0108 21:19:24.431848       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:19:37.979307       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0108 21:19:37.979567       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [1c6e8899fc497e069140e33049c350dcdfe8bcafcaaba19c4666917216092e42] <==
	* I0108 21:19:37.147526       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I0108 21:19:37.147538       1 taint_manager.go:209] "Sending events to api server"
	W0108 21:19:37.147611       1 node_lifecycle_controller.go:1058] Missing timestamp for Node no-preload-211859. Assuming now as a timestamp.
	I0108 21:19:37.147654       1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0108 21:19:37.147730       1 event.go:294] "Event occurred" object="no-preload-211859" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node no-preload-211859 event: Registered Node no-preload-211859 in Controller"
	I0108 21:19:37.148367       1 shared_informer.go:262] Caches are synced for GC
	I0108 21:19:37.179634       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 21:19:37.190768       1 shared_informer.go:262] Caches are synced for stateful set
	I0108 21:19:37.198281       1 shared_informer.go:262] Caches are synced for expand
	I0108 21:19:37.198284       1 shared_informer.go:262] Caches are synced for cronjob
	I0108 21:19:37.203901       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 21:19:37.215008       1 shared_informer.go:262] Caches are synced for ephemeral
	I0108 21:19:37.240454       1 shared_informer.go:262] Caches are synced for attach detach
	I0108 21:19:37.244863       1 shared_informer.go:262] Caches are synced for PVC protection
	I0108 21:19:37.249282       1 shared_informer.go:262] Caches are synced for persistent volume
	I0108 21:19:37.623965       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 21:19:37.698165       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 21:19:37.698192       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0108 21:19:38.152229       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I0108 21:19:38.156739       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vh4hl"
	I0108 21:19:38.232916       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zb6wz"
	I0108 21:19:38.562867       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-6zc6h"
	I0108 21:19:38.563087       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I0108 21:19:38.584331       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-jw8vf"
	I0108 21:19:38.832932       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-6zc6h"
	
	* 
	* ==> kube-proxy [640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6] <==
	* I0108 21:19:39.345755       1 node.go:163] Successfully retrieved node IP: 192.168.85.2
	I0108 21:19:39.345825       1 server_others.go:138] "Detected node IP" address="192.168.85.2"
	I0108 21:19:39.345855       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0108 21:19:39.365639       1 server_others.go:206] "Using iptables Proxier"
	I0108 21:19:39.365673       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0108 21:19:39.365686       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0108 21:19:39.365706       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0108 21:19:39.365730       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:19:39.365898       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:19:39.366196       1 server.go:661] "Version info" version="v1.25.3"
	I0108 21:19:39.366220       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:19:39.366849       1 config.go:444] "Starting node config controller"
	I0108 21:19:39.366868       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0108 21:19:39.367048       1 config.go:317] "Starting service config controller"
	I0108 21:19:39.367072       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0108 21:19:39.367244       1 config.go:226] "Starting endpoint slice config controller"
	I0108 21:19:39.367262       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0108 21:19:39.467069       1 shared_informer.go:262] Caches are synced for node config
	I0108 21:19:39.467273       1 shared_informer.go:262] Caches are synced for service config
	I0108 21:19:39.467307       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [7b61203838e946e52bb257036892e21a8137d6b02ae6e307cba917eba43045f1] <==
	* W0108 21:19:21.827015       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:19:21.827283       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 21:19:21.827006       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:19:21.827309       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 21:19:21.827107       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:19:21.827324       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:19:21.827566       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:19:21.827589       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:19:22.668548       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:19:22.668587       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:19:22.676570       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:19:22.676605       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 21:19:22.742158       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:19:22.742193       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:19:22.795464       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:19:22.795534       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:19:22.816566       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:19:22.816605       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 21:19:22.836431       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:19:22.836467       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 21:19:22.885562       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:19:22.885594       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:19:22.897861       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:19:22.897897       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0108 21:19:25.223767       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:19:01 UTC, end at Sun 2023-01-08 21:31:42 UTC. --
	Jan 08 21:30:24 no-preload-211859 kubelet[1743]: E0108 21:30:24.815586    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:30:29 no-preload-211859 kubelet[1743]: E0108 21:30:29.816810    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:30:34 no-preload-211859 kubelet[1743]: E0108 21:30:34.817759    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:30:39 no-preload-211859 kubelet[1743]: E0108 21:30:39.819334    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:30:44 no-preload-211859 kubelet[1743]: E0108 21:30:44.820769    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:30:49 no-preload-211859 kubelet[1743]: E0108 21:30:49.821801    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:30:54 no-preload-211859 kubelet[1743]: E0108 21:30:54.823236    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:30:59 no-preload-211859 kubelet[1743]: E0108 21:30:59.824605    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:04 no-preload-211859 kubelet[1743]: I0108 21:31:04.796941    1743 scope.go:115] "RemoveContainer" containerID="ee5c26b5be6693eba12509c2b026b6926fe69201676dd458a85e373d53fbbd42"
	Jan 08 21:31:04 no-preload-211859 kubelet[1743]: I0108 21:31:04.797249    1743 scope.go:115] "RemoveContainer" containerID="da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c"
	Jan 08 21:31:04 no-preload-211859 kubelet[1743]: E0108 21:31:04.797591    1743 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-vh4hl_kube-system(c002c329-15ad-4066-8f90-bee3d9d18431)\"" pod="kube-system/kindnet-vh4hl" podUID=c002c329-15ad-4066-8f90-bee3d9d18431
	Jan 08 21:31:04 no-preload-211859 kubelet[1743]: E0108 21:31:04.825269    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:09 no-preload-211859 kubelet[1743]: E0108 21:31:09.827020    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:14 no-preload-211859 kubelet[1743]: E0108 21:31:14.828207    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:16 no-preload-211859 kubelet[1743]: I0108 21:31:16.516981    1743 scope.go:115] "RemoveContainer" containerID="da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c"
	Jan 08 21:31:16 no-preload-211859 kubelet[1743]: E0108 21:31:16.517293    1743 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-vh4hl_kube-system(c002c329-15ad-4066-8f90-bee3d9d18431)\"" pod="kube-system/kindnet-vh4hl" podUID=c002c329-15ad-4066-8f90-bee3d9d18431
	Jan 08 21:31:19 no-preload-211859 kubelet[1743]: E0108 21:31:19.829102    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:24 no-preload-211859 kubelet[1743]: E0108 21:31:24.830716    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:27 no-preload-211859 kubelet[1743]: I0108 21:31:27.517205    1743 scope.go:115] "RemoveContainer" containerID="da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c"
	Jan 08 21:31:27 no-preload-211859 kubelet[1743]: E0108 21:31:27.517486    1743 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-vh4hl_kube-system(c002c329-15ad-4066-8f90-bee3d9d18431)\"" pod="kube-system/kindnet-vh4hl" podUID=c002c329-15ad-4066-8f90-bee3d9d18431
	Jan 08 21:31:29 no-preload-211859 kubelet[1743]: E0108 21:31:29.832229    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:34 no-preload-211859 kubelet[1743]: E0108 21:31:34.833824    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:39 no-preload-211859 kubelet[1743]: E0108 21:31:39.834824    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:42 no-preload-211859 kubelet[1743]: I0108 21:31:42.517006    1743 scope.go:115] "RemoveContainer" containerID="da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c"
	Jan 08 21:31:42 no-preload-211859 kubelet[1743]: E0108 21:31:42.517275    1743 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-vh4hl_kube-system(c002c329-15ad-4066-8f90-bee3d9d18431)\"" pod="kube-system/kindnet-vh4hl" podUID=c002c329-15ad-4066-8f90-bee3d9d18431
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-211859 -n no-preload-211859
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-211859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-565d847f94-jw8vf storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-211859 describe pod busybox coredns-565d847f94-jw8vf storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-211859 describe pod busybox coredns-565d847f94-jw8vf storage-provisioner: exit status 1 (67.88439ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9txc5 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-9txc5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m48s (x2 over 8m3s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-jw8vf" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-211859 describe pod busybox coredns-565d847f94-jw8vf storage-provisioner: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-211859
helpers_test.go:235: (dbg) docker inspect no-preload-211859:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65",
	        "Created": "2023-01-08T21:19:00.370984432Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 238788,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:19:00.742893962Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/hostname",
	        "HostsPath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/hosts",
	        "LogPath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65-json.log",
	        "Name": "/no-preload-211859",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-211859:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-211859",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676-init/diff:/var/lib/docker/overlay2/08b33854295261bb38a0074055b3dd7f2f489d35ea70ee5a594182ad1dc0fd2b/diff:/var/lib/docker/overlay2/5abe192e44c49d2a25560e4c1ae4a6ba5ec58100f8e1869d306a28b614caf414/diff:/var/lib/docker/overlay2/156a32df2dac676d7cd6558d5ad13be5054eb2fddc8d1b10137bf34bd6b1ddf7/diff:/var/lib/docker/overlay2/f3a5b8630133b5cdc48bbdb98f666efad221386dbd08f5fc096c7ba6d3f31667/diff:/var/lib/docker/overlay2/4d4ccd218e479874818175ec6c4e28f06f0084eed74b9a83af5a96e8bf41e6c7/diff:/var/lib/docker/overlay2/ab8a3247b29cb45a0024dbc2477a50b042f03e39064c09e23d21a2b5f331013c/diff:/var/lib/docker/overlay2/cd6161fa5d366910a9eb537894f374c49b0ba256c81a9c1ad3d4f4786d730668/diff:/var/lib/docker/overlay2/a26d88db409214dee405ed385f40b0d58e37ef670c26e65c1132eb27e3bb984d/diff:/var/lib/docker/overlay2/f76d833c20bf46fe7aec412f931039cdeb58b129552b97228c55fada8b05e63a/diff:/var/lib/docker/overlay2/584beb
80220462b8bc5a6c651735d55544ebef775c71b1a91b92235049a7e9da/diff:/var/lib/docker/overlay2/5209ebd508837f05291d5f22eff307db47f9c94691dd096cdc6ff6446db45461/diff:/var/lib/docker/overlay2/968fc302c0737742d4698bb0026b5cfc50c4b2934930db492c97b346fca2bd67/diff:/var/lib/docker/overlay2/b469d09ff82a75fb0f6dd04264a04b21d3d113c8577afc6dbc6a7dafe4022179/diff:/var/lib/docker/overlay2/2869116cf2f02bdc6170bde623a11022eba8e5c0f84630c9a0fd0a86483d0ed2/diff:/var/lib/docker/overlay2/ad4794ae637b0ce306963007b35562110fba050d9e3cd6196830448c749d7cc6/diff:/var/lib/docker/overlay2/a53b191897dfffb12ad2948cad16d668a0f7c2be5e3e6fd2556fe22d8d608879/diff:/var/lib/docker/overlay2/bbc7772821406640e29f442eb8752d92d7b72e8314035d6cf94255976b90642e/diff:/var/lib/docker/overlay2/5e3ac5600af83e3f6f756895fbd7a66fa97e799f659541508353078d057fd89b/diff:/var/lib/docker/overlay2/79c23e7084aa1de637cb09a7c07679d6c8d4ff6c6e7384f7f15cb549f8186fd6/diff:/var/lib/docker/overlay2/86ef1d4759376f1edb9cf43004ae59ba61200d79acea357b872de5247071531a/diff:/var/lib/d
ocker/overlay2/5c948a7514e051c2a3b4600944d475a674404c1f58b1bd7fd4d7b764be35c3e7/diff:/var/lib/docker/overlay2/71b25d81823337d776e0202f623b2f822a2ebba0e8f935e3211da145bbdac22d/diff:/var/lib/docker/overlay2/4261f26f8c837e7f8f07ba0cd2aca7274aa96561ac0bd104af28d44c15678f8d/diff:/var/lib/docker/overlay2/f2a71b21156c3ad367bb9fe8bde3e75df6681f9e505d34d8e1b635e50aa0e29b/diff:/var/lib/docker/overlay2/9ef1fde443b60bf4cda7ef2a51adcacef4383280f30eb6ee14e23959a5395e18/diff:/var/lib/docker/overlay2/aa727d5a432452b3e1ed2fc994db0bf2d5a47af89bc768a1e2b633baa8319025/diff:/var/lib/docker/overlay2/dc9499b738334b5626de1363086d853c4e6c2c5b2d51f58e8e1ed8346cc39f45/diff:/var/lib/docker/overlay2/03cc962675627b454b9857b5ffcac439143653d35cd42ad886571fbecd955c6c/diff:/var/lib/docker/overlay2/32064543ecb8bae609fda6b220233d7b73f4e601bd320e1d389451fb120c2459/diff:/var/lib/docker/overlay2/a0c00493cc3fdbdaa87f61faf83c8fcedb9ee4d5b24ecc35eceba0429b6dcc88/diff:/var/lib/docker/overlay2/18cbe77f8dab8144889e1deb46a8795fd207afbdfd9ea3fdeb526b72d7e
4a4cb/diff:/var/lib/docker/overlay2/dca4f5d0bca28bc75fa764afd39d07871f051c2d362fc2850ca957b975f75555/diff:/var/lib/docker/overlay2/cd071d8f661b9ff9c6d861fce06842188fda3d307062488b93944a97883f4a67/diff:/var/lib/docker/overlay2/5a0c166a1e11d0bb9f7de4a882be960f70ac972e91b3ab8fd83fc6eec2bf1d5e/diff:/var/lib/docker/overlay2/43eb3ae44d3bf3e6cbf5a66accacb1247ef210213c56a67097949d06794f4bbc/diff:/var/lib/docker/overlay2/b60cb5b525f4aed6b27cf2e451596e134d1185254f3798d5a91357814552e273/diff:/var/lib/docker/overlay2/e5c8c85e78fedb1981436f368ad70bdfe38d240dddceac3d103e1d09bc252cef/diff:/var/lib/docker/overlay2/7bf66ab7d17eda3c6760fe43588afa33bb56279f5a7202de9cc98b4711f0da5b/diff:/var/lib/docker/overlay2/d5dbd6e8a20b8681e7b55b57c382c8c9029da8496cc9e0f4d1eb3b224583535e/diff:/var/lib/docker/overlay2/de3ca9b514aa5071c0fd39d0d80fa080bce6e82e872747ec916ef7bf379c0c2a/diff:/var/lib/docker/overlay2/80bff03df7f2a6639d98a2c99d1cc55a36dc9110501452bfd7bce3bce496541e/diff:/var/lib/docker/overlay2/471bea5a4732f210c3fcf87a02643b615a96f6
39d01ac2c3e71f7886b23733a5/diff:/var/lib/docker/overlay2/29dd84326e3ef52c85008a60a7ff2225fd18f1d51ec3e60f143a0d93e442469d/diff:/var/lib/docker/overlay2/05d035eba3e40a396836d9917064869afd7ec70920f526f8903dc8f5a9f2dce3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-211859",
	                "Source": "/var/lib/docker/volumes/no-preload-211859/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-211859",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-211859",
	                "name.minikube.sigs.k8s.io": "no-preload-211859",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6412d705758b0fa3708816e7c5f6b0b6bfa26c10bbbc6e3acea6f602d9c2dab3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33017"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33016"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33013"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33015"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33014"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6412d705758b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-211859": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "23cabd631389",
	                        "no-preload-211859"
	                    ],
	                    "NetworkID": "f6ac14d41355072c0829af36f4aed661fe422e2af93237ea348f6b100ade02e6",
	                    "EndpointID": "2f14131c7e47074512e155979b67d1e3a5303bb55db398f44880c21804eebda9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-211859 -n no-preload-211859
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-211859 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                         | disable-driver-mounts-211952 | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC | 08 Jan 23 21:19 UTC |
	|         | disable-driver-mounts-211952                               |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:19 UTC |                     |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-211950                | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:20 UTC | 08 Jan 23 21:20 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:20 UTC | 08 Jan 23 21:21 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-211950                     | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:21 UTC | 08 Jan 23 21:21 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:21 UTC | 08 Jan 23 21:26 UTC |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                              |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-211950 sudo                                 | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-212639                 | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-212639                      | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-212639 sudo                                  | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| addons  | enable metrics-server -p old-k8s-version-211828            | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-211828                                  | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-211828                 | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-211828                                  | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --kvm-network=default                                      |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                              |         |         |                     |                     |
	|         | --keep-context=false                                       |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                              |         |         |                     |                     |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 21:31:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:31:14.786818  274657 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:31:14.787251  274657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:31:14.787265  274657 out.go:309] Setting ErrFile to fd 2...
	I0108 21:31:14.787272  274657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:31:14.787427  274657 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:31:14.788057  274657 out.go:303] Setting JSON to false
	I0108 21:31:14.789452  274657 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4424,"bootTime":1673209051,"procs":560,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:31:14.789560  274657 start.go:135] virtualization: kvm guest
	I0108 21:31:14.792273  274657 out.go:177] * [old-k8s-version-211828] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:31:14.793736  274657 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:31:14.793706  274657 notify.go:220] Checking for updates...
	I0108 21:31:14.796380  274657 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:31:14.797863  274657 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:31:14.799587  274657 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:31:14.801298  274657 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:31:14.803317  274657 config.go:180] Loaded profile config "old-k8s-version-211828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0108 21:31:14.805219  274657 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I0108 21:31:14.806495  274657 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:31:14.836588  274657 docker.go:137] docker version: linux-20.10.22
	I0108 21:31:14.836697  274657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:31:14.935102  274657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:31:14.857932215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:31:14.935250  274657 docker.go:254] overlay module found
	I0108 21:31:14.937603  274657 out.go:177] * Using the docker driver based on existing profile
	I0108 21:31:14.939308  274657 start.go:294] selected driver: docker
	I0108 21:31:14.939320  274657 start.go:838] validating driver "docker" against &{Name:old-k8s-version-211828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-211828 Namespace:default APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:31:14.939425  274657 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:31:14.940295  274657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:31:15.037391  274657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:31:14.960690951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:31:15.037661  274657 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:31:15.037690  274657 cni.go:95] Creating CNI manager for ""
	I0108 21:31:15.037701  274657 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:31:15.037727  274657 start_flags.go:317] config:
	{Name:old-k8s-version-211828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-211828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:31:15.040192  274657 out.go:177] * Starting control plane node old-k8s-version-211828 in cluster old-k8s-version-211828
	I0108 21:31:15.041641  274657 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:31:15.043001  274657 out.go:177] * Pulling base image ...
	I0108 21:31:15.044447  274657 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0108 21:31:15.044499  274657 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0108 21:31:15.044507  274657 cache.go:57] Caching tarball of preloaded images
	I0108 21:31:15.044542  274657 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:31:15.044751  274657 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:31:15.044768  274657 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0108 21:31:15.044879  274657 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/config.json ...
	I0108 21:31:15.070621  274657 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:31:15.070646  274657 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:31:15.070659  274657 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:31:15.070696  274657 start.go:364] acquiring machines lock for old-k8s-version-211828: {Name:mk7415b788fbdcf6791633774a550ddef2131776 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:15.070786  274657 start.go:368] acquired machines lock for "old-k8s-version-211828" in 67.237µs
	I0108 21:31:15.070803  274657 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:31:15.070808  274657 fix.go:55] fixHost starting: 
	I0108 21:31:15.071007  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:31:15.094712  274657 fix.go:103] recreateIfNeeded on old-k8s-version-211828: state=Stopped err=<nil>
	W0108 21:31:15.094743  274657 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:31:15.097062  274657 out.go:177] * Restarting existing docker container for "old-k8s-version-211828" ...
	I0108 21:31:15.098676  274657 cli_runner.go:164] Run: docker start old-k8s-version-211828
	I0108 21:31:15.451736  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:31:15.477931  274657 kic.go:415] container "old-k8s-version-211828" state is running.
	I0108 21:31:15.478259  274657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-211828
	I0108 21:31:15.502791  274657 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/config.json ...
	I0108 21:31:15.503068  274657 machine.go:88] provisioning docker machine ...
	I0108 21:31:15.503092  274657 ubuntu.go:169] provisioning hostname "old-k8s-version-211828"
	I0108 21:31:15.503141  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:15.527135  274657 main.go:134] libmachine: Using SSH client type: native
	I0108 21:31:15.527388  274657 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33047 <nil> <nil>}
	I0108 21:31:15.527414  274657 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-211828 && echo "old-k8s-version-211828" | sudo tee /etc/hostname
	I0108 21:31:15.528154  274657 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49092->127.0.0.1:33047: read: connection reset by peer
	I0108 21:31:18.652158  274657 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-211828
	
	I0108 21:31:18.652235  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:18.677352  274657 main.go:134] libmachine: Using SSH client type: native
	I0108 21:31:18.677632  274657 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33047 <nil> <nil>}
	I0108 21:31:18.677662  274657 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-211828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-211828/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-211828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:31:18.791306  274657 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:31:18.791338  274657 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:31:18.791356  274657 ubuntu.go:177] setting up certificates
	I0108 21:31:18.791364  274657 provision.go:83] configureAuth start
	I0108 21:31:18.791407  274657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-211828
	I0108 21:31:18.815953  274657 provision.go:138] copyHostCerts
	I0108 21:31:18.816006  274657 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:31:18.816012  274657 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:31:18.816081  274657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:31:18.816177  274657 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:31:18.816185  274657 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:31:18.816212  274657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:31:18.816273  274657 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:31:18.816281  274657 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:31:18.816304  274657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:31:18.816348  274657 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-211828 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-211828]
	I0108 21:31:18.931118  274657 provision.go:172] copyRemoteCerts
	I0108 21:31:18.931183  274657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:31:18.931217  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:18.955719  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:31:19.042817  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:31:19.060612  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 21:31:19.077223  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:31:19.093605  274657 provision.go:86] duration metric: configureAuth took 302.219123ms
	I0108 21:31:19.093631  274657 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:31:19.093784  274657 config.go:180] Loaded profile config "old-k8s-version-211828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0108 21:31:19.093794  274657 machine.go:91] provisioned docker machine in 3.590715689s
	I0108 21:31:19.093801  274657 start.go:300] post-start starting for "old-k8s-version-211828" (driver="docker")
	I0108 21:31:19.093807  274657 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:31:19.093848  274657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:31:19.093884  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:19.118184  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:31:19.206786  274657 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:31:19.209517  274657 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:31:19.209547  274657 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:31:19.209558  274657 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:31:19.209564  274657 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:31:19.209576  274657 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:31:19.209629  274657 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:31:19.209704  274657 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:31:19.209800  274657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:31:19.216505  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:31:19.232916  274657 start.go:303] post-start completed in 139.102319ms
	I0108 21:31:19.232985  274657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:31:19.233025  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:19.257132  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:31:19.339957  274657 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:31:19.343759  274657 fix.go:57] fixHost completed within 4.272947567s
	I0108 21:31:19.343776  274657 start.go:83] releasing machines lock for "old-k8s-version-211828", held for 4.272979327s
	I0108 21:31:19.343848  274657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-211828
	I0108 21:31:19.367793  274657 ssh_runner.go:195] Run: cat /version.json
	I0108 21:31:19.367832  274657 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0108 21:31:19.367913  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:19.367840  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:19.395829  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:31:19.396770  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:31:19.497144  274657 ssh_runner.go:195] Run: systemctl --version
	I0108 21:31:19.501133  274657 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:31:19.512197  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:31:19.521435  274657 docker.go:189] disabling docker service ...
	I0108 21:31:19.521487  274657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:31:19.530733  274657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:31:19.539679  274657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:31:19.619642  274657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:31:19.693532  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:31:19.702588  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:31:19.714970  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.1"|' -i /etc/containerd/config.toml"
	I0108 21:31:19.723127  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:31:19.730986  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:31:19.738308  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
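For readability, the net effect of the crictl/containerd preparation above is summarized below. This is a sketch reconstructed only from the tee and sed commands shown, not a dump of the files on the node; the real config.toml contains many more fields:

	# /etc/crictl.yaml, as written by the tee command above
	runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock

	# fields rewritten in /etc/containerd/config.toml by the four sed commands above
	sandbox_image = "k8s.gcr.io/pause:3.1"
	restrict_oom_score_adj = false
	SystemdCgroup = false
	conf_dir = "/etc/cni/net.mk"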
	I0108 21:31:19.746088  274657 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:31:19.752009  274657 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:31:19.757928  274657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:31:19.836380  274657 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:31:19.899437  274657 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:31:19.899536  274657 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:31:19.903121  274657 start.go:472] Will wait 60s for crictl version
	I0108 21:31:19.903177  274657 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:31:19.931573  274657 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:31:19Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
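The retry above is containerd still coming up after the systemctl restart at 21:31:19.836; the same probe can be repeated by hand. A sketch, assuming an interactive shell on the node (for example via minikube ssh -p old-k8s-version-211828):

	sudo crictl version                            # fails with "server is not initialized yet" until the CRI service answers
	sudo systemctl status containerd --no-pager    # confirm the restarted containerd unit is active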
	I0108 21:31:30.978568  274657 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:31:31.001293  274657 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:31:31.001343  274657 ssh_runner.go:195] Run: containerd --version
	I0108 21:31:31.023736  274657 ssh_runner.go:195] Run: containerd --version
	I0108 21:31:31.049215  274657 out.go:177] * Preparing Kubernetes v1.16.0 on containerd 1.6.10 ...
	I0108 21:31:31.050855  274657 cli_runner.go:164] Run: docker network inspect old-k8s-version-211828 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:31:31.072896  274657 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0108 21:31:31.076073  274657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:31:31.087169  274657 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0108 21:31:31.088521  274657 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0108 21:31:31.088579  274657 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:31:31.110490  274657 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:31:31.110508  274657 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:31:31.110556  274657 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:31:31.133748  274657 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:31:31.133766  274657 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:31:31.133809  274657 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:31:31.156636  274657 cni.go:95] Creating CNI manager for ""
	I0108 21:31:31.156662  274657 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:31:31.156675  274657 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:31:31.156688  274657 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-211828 NodeName:old-k8s-version-211828 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:31:31.156817  274657 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-211828"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-211828
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:31:31.156894  274657 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-211828 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-211828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:31:31.156938  274657 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0108 21:31:31.164010  274657 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:31:31.164059  274657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:31:31.170368  274657 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (567 bytes)
	I0108 21:31:31.182752  274657 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:31:31.195402  274657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2128 bytes)
	I0108 21:31:31.207914  274657 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:31:31.210710  274657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
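Taken together with the host.minikube.internal rewrite at 21:31:31.076 above, the /etc/hosts edits leave the node with the following minikube-managed entries (a sketch assembled from the echo commands shown, not a dump of the file):

	192.168.76.1	host.minikube.internal
	192.168.76.2	control-plane.minikube.internal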
	I0108 21:31:31.219370  274657 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828 for IP: 192.168.76.2
	I0108 21:31:31.219455  274657 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:31:31.219534  274657 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:31:31.219611  274657 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/client.key
	I0108 21:31:31.219669  274657 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.key.31bdca25
	I0108 21:31:31.219701  274657 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/proxy-client.key
	I0108 21:31:31.219785  274657 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:31:31.219813  274657 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:31:31.219822  274657 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:31:31.219849  274657 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:31:31.219874  274657 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:31:31.219895  274657 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:31:31.219944  274657 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:31:31.220509  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:31:31.237015  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 21:31:31.253867  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:31:31.270214  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:31:31.286736  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:31:31.303748  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:31:31.321340  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:31:31.338473  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:31:31.355647  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:31:31.372647  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:31:31.389808  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:31:31.406899  274657 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:31:31.419384  274657 ssh_runner.go:195] Run: openssl version
	I0108 21:31:31.424189  274657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:31:31.431623  274657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:31:31.434625  274657 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:31:31.434666  274657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:31:31.439324  274657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:31:31.446001  274657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:31:31.453698  274657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:31:31.456687  274657 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:31:31.456735  274657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:31:31.461571  274657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:31:31.468289  274657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:31:31.475322  274657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:31:31.478233  274657 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:31:31.478271  274657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:31:31.483024  274657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
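The certificate steps above follow OpenSSL's hashed-symlink convention: each CA is linked under /etc/ssl/certs/<subject-hash>.0 so system TLS clients trust it, and the 3ec20f2e.0, b5213941.0 and 51391683.0 names are exactly those hashes. A minimal sketch of the same pattern for one of the certs above:

	# compute the subject hash, then expose the CA under /etc/ssl/certs/<hash>.0
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${hash}.0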
	I0108 21:31:31.489456  274657 kubeadm.go:396] StartCluster: {Name:old-k8s-version-211828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-211828 Namespace:default APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:31:31.489561  274657 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:31:31.489594  274657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:31:31.514364  274657 cri.go:87] found id: "ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a"
	I0108 21:31:31.514386  274657 cri.go:87] found id: "4fdaee2b10f2982887b9869fcd0c1dc5ebe85380068a2c7ca6e308bbd418bf25"
	I0108 21:31:31.514401  274657 cri.go:87] found id: "a9e20d8377a666867d2e18a3ea12818eaa42542c47d99b0ba20c5f7b3c9a8f70"
	I0108 21:31:31.514407  274657 cri.go:87] found id: "3baeebbc6da6011661ac440d440193720d7cb3ffc1d6f51175b239cc7994d8d4"
	I0108 21:31:31.514412  274657 cri.go:87] found id: "dc587e05c9875fe35b86a28d7d5b8fc7bedc7907ec9abcf12c1883d15804ed4d"
	I0108 21:31:31.514419  274657 cri.go:87] found id: "18030e6256a0f097fd3fd026a18690f5b7e901b5dacd851696eb59d51effb330"
	I0108 21:31:31.514424  274657 cri.go:87] found id: ""
	I0108 21:31:31.514460  274657 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:31:31.525493  274657 cri.go:114] JSON = null
	W0108 21:31:31.525551  274657 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0108 21:31:31.525611  274657 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:31:31.532465  274657 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:31:31.532485  274657 kubeadm.go:627] restartCluster start
	I0108 21:31:31.532526  274657 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:31:31.538695  274657 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:31.539540  274657 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-211828" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:31:31.539974  274657 kubeconfig.go:146] "old-k8s-version-211828" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:31:31.540778  274657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:31:31.542454  274657 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:31:31.548835  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:31.548878  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:31.556574  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:31.756964  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:31.757026  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:31.765711  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:31.956987  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:31.957087  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:31.965822  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:32.157114  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:32.157204  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:32.165572  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:32.356849  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:32.356932  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:32.365936  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:32.557219  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:32.557301  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:32.565818  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:32.757103  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:32.757202  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:32.765601  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:32.956833  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:32.956909  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:32.965592  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:33.156802  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:33.156864  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:33.165214  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:33.357531  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:33.357620  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:33.366024  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:33.557341  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:33.557432  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:33.566047  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:33.757323  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:33.757407  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:33.766123  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:33.957421  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:33.957482  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:33.965897  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:34.157184  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:34.157255  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:34.165750  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:34.357066  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:34.357148  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:34.365686  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:34.556893  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:34.556978  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:34.566772  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:34.566791  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:34.566823  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:34.574472  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:34.574499  274657 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
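The repeated "Checking apiserver status ..." entries above are a poll: the node is probed over SSH with pgrep until a kube-apiserver process shows up or the wait times out, at which point the "needs reconfigure" path is taken. Below is a minimal Go sketch of that kind of retry loop; it is illustrative only, not minikube's actual code. The runCommand helper, the 200 ms cadence, and the 5-second timeout in main are assumptions; the pgrep command itself is the one recorded in the log.

// Illustrative sketch (not minikube's code): poll for a kube-apiserver
// process by running pgrep in a shell and retrying until a deadline,
// mirroring the repeated "Checking apiserver status ..." log entries.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// runCommand executes a shell command locally; in the log the equivalent
// command is run on the node over SSH (hypothetical stand-in helper).
func runCommand(cmd string) (string, error) {
	out, err := exec.Command("/bin/sh", "-c", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForAPIServerProcess retries the pgrep check until it returns a PID
// or the deadline passes.
func waitForAPIServerProcess(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pid, err := runCommand(`sudo pgrep -xnf kube-apiserver.*minikube.*`)
		if err == nil && pid != "" {
			return pid, nil
		}
		time.Sleep(200 * time.Millisecond) // roughly the cadence seen above (assumed)
	}
	return "", errors.New("timed out waiting for the condition")
}

func main() {
	pid, err := waitForAPIServerProcess(5 * time.Second)
	if err != nil {
		fmt.Println("apiserver error:", err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}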
	I0108 21:31:34.574515  274657 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:31:34.574528  274657 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:31:34.574567  274657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:31:34.600377  274657 cri.go:87] found id: "ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a"
	I0108 21:31:34.600401  274657 cri.go:87] found id: "4fdaee2b10f2982887b9869fcd0c1dc5ebe85380068a2c7ca6e308bbd418bf25"
	I0108 21:31:34.600411  274657 cri.go:87] found id: "a9e20d8377a666867d2e18a3ea12818eaa42542c47d99b0ba20c5f7b3c9a8f70"
	I0108 21:31:34.600422  274657 cri.go:87] found id: "3baeebbc6da6011661ac440d440193720d7cb3ffc1d6f51175b239cc7994d8d4"
	I0108 21:31:34.600432  274657 cri.go:87] found id: "dc587e05c9875fe35b86a28d7d5b8fc7bedc7907ec9abcf12c1883d15804ed4d"
	I0108 21:31:34.600445  274657 cri.go:87] found id: "18030e6256a0f097fd3fd026a18690f5b7e901b5dacd851696eb59d51effb330"
	I0108 21:31:34.600455  274657 cri.go:87] found id: ""
	I0108 21:31:34.600466  274657 cri.go:232] Stopping containers: [ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a 4fdaee2b10f2982887b9869fcd0c1dc5ebe85380068a2c7ca6e308bbd418bf25 a9e20d8377a666867d2e18a3ea12818eaa42542c47d99b0ba20c5f7b3c9a8f70 3baeebbc6da6011661ac440d440193720d7cb3ffc1d6f51175b239cc7994d8d4 dc587e05c9875fe35b86a28d7d5b8fc7bedc7907ec9abcf12c1883d15804ed4d 18030e6256a0f097fd3fd026a18690f5b7e901b5dacd851696eb59d51effb330]
	I0108 21:31:34.600511  274657 ssh_runner.go:195] Run: which crictl
	I0108 21:31:34.603393  274657 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a 4fdaee2b10f2982887b9869fcd0c1dc5ebe85380068a2c7ca6e308bbd418bf25 a9e20d8377a666867d2e18a3ea12818eaa42542c47d99b0ba20c5f7b3c9a8f70 3baeebbc6da6011661ac440d440193720d7cb3ffc1d6f51175b239cc7994d8d4 dc587e05c9875fe35b86a28d7d5b8fc7bedc7907ec9abcf12c1883d15804ed4d 18030e6256a0f097fd3fd026a18690f5b7e901b5dacd851696eb59d51effb330
	I0108 21:31:34.628109  274657 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:31:34.637971  274657 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:31:34.645121  274657 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Jan  8 21:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Jan  8 21:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Jan  8 21:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Jan  8 21:18 /etc/kubernetes/scheduler.conf
	
	I0108 21:31:34.645173  274657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 21:31:34.651869  274657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 21:31:34.658451  274657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 21:31:34.665036  274657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 21:31:34.671382  274657 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:31:34.677810  274657 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:31:34.677835  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:31:34.729578  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:31:35.487872  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:31:35.629926  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:31:35.689674  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
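The five ssh_runner lines above re-run the kubeadm init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. The sketch below drives that same sequence; it is not minikube's implementation, it assumes the commands run where kubeadm is installed (i.e. on the node itself), and the only values it reuses are the binary path, PATH prefix, and config path shown in the log.

// Illustrative sketch (not minikube's code): run the kubeadm init phases
// from the log above in sequence, stopping at the first failure.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		// Command shape copied from the log; paths assume the same node layout.
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase,
		)
		c := exec.Command("/bin/bash", "-c", cmd)
		c.Stdout, c.Stderr = os.Stdout, os.Stderr
		if err := c.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}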
	I0108 21:31:35.830162  274657 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:31:35.830228  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:31:36.340067  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:31:36.839992  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:31:36.851033  274657 api_server.go:71] duration metric: took 1.020878979s to wait for apiserver process to appear ...
	I0108 21:31:36.851064  274657 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:31:36.851078  274657 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0108 21:31:36.851443  274657 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0108 21:31:37.352200  274657 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0108 21:31:40.719294  274657 api_server.go:278] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 21:31:40.719336  274657 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 21:31:40.852636  274657 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0108 21:31:41.014451  274657 api_server.go:278] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 21:31:41.014482  274657 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 21:31:41.352649  274657 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0108 21:31:41.360506  274657 api_server.go:278] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 21:31:41.360537  274657 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 21:31:41.852425  274657 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0108 21:31:41.856815  274657 api_server.go:278] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 21:31:41.856840  274657 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 21:31:42.352410  274657 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0108 21:31:42.357175  274657 api_server.go:278] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0108 21:31:42.364333  274657 api_server.go:140] control plane version: v1.16.0
	I0108 21:31:42.364358  274657 api_server.go:130] duration metric: took 5.513286094s to wait for apiserver health ...
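Once a process appears, readiness is decided by polling https://192.168.76.2:8443/healthz: the 500 responses above list each poststarthook as [+] or [-] until everything reports ok and the endpoint returns 200. A small illustrative Go loop in that spirit follows; it is not minikube's code, and the InsecureSkipVerify transport, the 500 ms cadence, and the 2-minute timeout are assumptions, while the URL is the one from the log.

// Illustrative sketch (not minikube's code): poll an apiserver /healthz
// endpoint until it returns 200, tolerating the 500 "poststarthook ... failed"
// responses seen above while components finish starting.
package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		// TLS verification skipped purely for this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz check passed
			}
			// 500 with per-hook [+]/[-] lines, as in the log: keep retrying.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the ~0.5s cadence above (assumed)
	}
	return errors.New("timed out waiting for apiserver health")
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}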
	I0108 21:31:42.364370  274657 cni.go:95] Creating CNI manager for ""
	I0108 21:31:42.364378  274657 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:31:42.366614  274657 out.go:177] * Configuring CNI (Container Networking Interface) ...
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	da42aae8803ba       d6e3e26021b60       3 minutes ago       Exited              kindnet-cni               3                   de969308cd0da
	640b6f75f7dac       beaaf00edd38a       12 minutes ago      Running             kube-proxy                0                   40574ad3062a4
	7b61203838e94       6d23ec0e8b87e       12 minutes ago      Running             kube-scheduler            0                   62c93aab5d432
	e5292d3c9357a       0346dbd74bcb9       12 minutes ago      Running             kube-apiserver            0                   569500f001a7b
	4777a2f6ea154       a8a176a5d5d69       12 minutes ago      Running             etcd                      0                   40770a6daff3e
	1c6e8899fc497       6039992312758       12 minutes ago      Running             kube-controller-manager   0                   594a31cb2e0e2
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sun 2023-01-08 21:19:01 UTC, end at Sun 2023-01-08 21:31:44 UTC. --
	Jan 08 21:25:03 no-preload-211859 containerd[513]: time="2023-01-08T21:25:03.438120197Z" level=warning msg="cleaning up after shim disconnected" id=3b86738431af400844575d5347e086e6633f433667bd95cc38980c627cb9bf93 namespace=k8s.io
	Jan 08 21:25:03 no-preload-211859 containerd[513]: time="2023-01-08T21:25:03.438136802Z" level=info msg="cleaning up dead shim"
	Jan 08 21:25:03 no-preload-211859 containerd[513]: time="2023-01-08T21:25:03.447797439Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:25:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2891 runtime=io.containerd.runc.v2\n"
	Jan 08 21:25:04 no-preload-211859 containerd[513]: time="2023-01-08T21:25:04.160320794Z" level=info msg="RemoveContainer for \"01444440cdfa75043bc853a535682e21b80b51a68a4c83045397f1599c379c38\""
	Jan 08 21:25:04 no-preload-211859 containerd[513]: time="2023-01-08T21:25:04.166638123Z" level=info msg="RemoveContainer for \"01444440cdfa75043bc853a535682e21b80b51a68a4c83045397f1599c379c38\" returns successfully"
	Jan 08 21:25:18 no-preload-211859 containerd[513]: time="2023-01-08T21:25:18.520239400Z" level=info msg="CreateContainer within sandbox \"de969308cd0da9bfec2cf38136673604413fd525fb7e1e2091093cb72e00e62d\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jan 08 21:25:18 no-preload-211859 containerd[513]: time="2023-01-08T21:25:18.534094732Z" level=info msg="CreateContainer within sandbox \"de969308cd0da9bfec2cf38136673604413fd525fb7e1e2091093cb72e00e62d\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"ee5c26b5be6693eba12509c2b026b6926fe69201676dd458a85e373d53fbbd42\""
	Jan 08 21:25:18 no-preload-211859 containerd[513]: time="2023-01-08T21:25:18.534653853Z" level=info msg="StartContainer for \"ee5c26b5be6693eba12509c2b026b6926fe69201676dd458a85e373d53fbbd42\""
	Jan 08 21:25:18 no-preload-211859 containerd[513]: time="2023-01-08T21:25:18.624706158Z" level=info msg="StartContainer for \"ee5c26b5be6693eba12509c2b026b6926fe69201676dd458a85e373d53fbbd42\" returns successfully"
	Jan 08 21:27:59 no-preload-211859 containerd[513]: time="2023-01-08T21:27:59.061990890Z" level=info msg="shim disconnected" id=ee5c26b5be6693eba12509c2b026b6926fe69201676dd458a85e373d53fbbd42
	Jan 08 21:27:59 no-preload-211859 containerd[513]: time="2023-01-08T21:27:59.062061003Z" level=warning msg="cleaning up after shim disconnected" id=ee5c26b5be6693eba12509c2b026b6926fe69201676dd458a85e373d53fbbd42 namespace=k8s.io
	Jan 08 21:27:59 no-preload-211859 containerd[513]: time="2023-01-08T21:27:59.062078804Z" level=info msg="cleaning up dead shim"
	Jan 08 21:27:59 no-preload-211859 containerd[513]: time="2023-01-08T21:27:59.070789653Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:27:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3004 runtime=io.containerd.runc.v2\n"
	Jan 08 21:27:59 no-preload-211859 containerd[513]: time="2023-01-08T21:27:59.473824657Z" level=info msg="RemoveContainer for \"3b86738431af400844575d5347e086e6633f433667bd95cc38980c627cb9bf93\""
	Jan 08 21:27:59 no-preload-211859 containerd[513]: time="2023-01-08T21:27:59.480130313Z" level=info msg="RemoveContainer for \"3b86738431af400844575d5347e086e6633f433667bd95cc38980c627cb9bf93\" returns successfully"
	Jan 08 21:28:23 no-preload-211859 containerd[513]: time="2023-01-08T21:28:23.519127193Z" level=info msg="CreateContainer within sandbox \"de969308cd0da9bfec2cf38136673604413fd525fb7e1e2091093cb72e00e62d\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jan 08 21:28:23 no-preload-211859 containerd[513]: time="2023-01-08T21:28:23.531776811Z" level=info msg="CreateContainer within sandbox \"de969308cd0da9bfec2cf38136673604413fd525fb7e1e2091093cb72e00e62d\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c\""
	Jan 08 21:28:23 no-preload-211859 containerd[513]: time="2023-01-08T21:28:23.532257406Z" level=info msg="StartContainer for \"da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c\""
	Jan 08 21:28:23 no-preload-211859 containerd[513]: time="2023-01-08T21:28:23.625992265Z" level=info msg="StartContainer for \"da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c\" returns successfully"
	Jan 08 21:31:04 no-preload-211859 containerd[513]: time="2023-01-08T21:31:04.051921889Z" level=info msg="shim disconnected" id=da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c
	Jan 08 21:31:04 no-preload-211859 containerd[513]: time="2023-01-08T21:31:04.051988781Z" level=warning msg="cleaning up after shim disconnected" id=da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c namespace=k8s.io
	Jan 08 21:31:04 no-preload-211859 containerd[513]: time="2023-01-08T21:31:04.052005374Z" level=info msg="cleaning up dead shim"
	Jan 08 21:31:04 no-preload-211859 containerd[513]: time="2023-01-08T21:31:04.061668438Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:31:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3121 runtime=io.containerd.runc.v2\n"
	Jan 08 21:31:04 no-preload-211859 containerd[513]: time="2023-01-08T21:31:04.798048658Z" level=info msg="RemoveContainer for \"ee5c26b5be6693eba12509c2b026b6926fe69201676dd458a85e373d53fbbd42\""
	Jan 08 21:31:04 no-preload-211859 containerd[513]: time="2023-01-08T21:31:04.803161353Z" level=info msg="RemoveContainer for \"ee5c26b5be6693eba12509c2b026b6926fe69201676dd458a85e373d53fbbd42\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-211859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-211859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=no-preload-211859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T21_19_25_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 21:19:21 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-211859
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 08 Jan 2023 21:31:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 21:30:07 +0000   Sun, 08 Jan 2023 21:19:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 21:30:07 +0000   Sun, 08 Jan 2023 21:19:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 21:30:07 +0000   Sun, 08 Jan 2023 21:19:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 08 Jan 2023 21:30:07 +0000   Sun, 08 Jan 2023 21:19:19 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-211859
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                1811e86e-6254-4928-9c37-fe78bdd2d83e
	  Boot ID:                    abb1671c-ddf5-4694-bdc8-1024e5cc0b18
	  Kernel Version:             5.15.0-1025-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.10
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-211859                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-vh4hl                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-no-preload-211859             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-no-preload-211859    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-zb6wz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-no-preload-211859             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x4 over 12m)  kubelet          Node no-preload-211859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x4 over 12m)  kubelet          Node no-preload-211859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x4 over 12m)  kubelet          Node no-preload-211859 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node no-preload-211859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node no-preload-211859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node no-preload-211859 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node no-preload-211859 event: Registered Node no-preload-211859 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +2.971851] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027844] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027909] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[Jan 8 21:19] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.006215] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023951] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.967852] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.035798] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023925] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.940341] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.027361] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.019905] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	
	* 
	* ==> etcd [4777a2f6ea154d2e676477c6810e4eebb38bfca013c0990a8605fa7676818ecf] <==
	* {"level":"info","ts":"2023-01-08T21:19:38.340Z","caller":"traceutil/trace.go:171","msg":"trace[19672861] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"106.056222ms","start":"2023-01-08T21:19:38.234Z","end":"2023-01-08T21:19:38.340Z","steps":["trace[19672861] 'process raft request'  (duration: 105.814606ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:19:38.340Z","caller":"traceutil/trace.go:171","msg":"trace[1129527188] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"104.142429ms","start":"2023-01-08T21:19:38.236Z","end":"2023-01-08T21:19:38.340Z","steps":["trace[1129527188] 'process raft request'  (duration: 103.925251ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-08T21:19:38.341Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"108.501086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-vh4hl\" ","response":"range_response_count:1 size:3686"}
	{"level":"info","ts":"2023-01-08T21:19:38.341Z","caller":"traceutil/trace.go:171","msg":"trace[246398554] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-vh4hl; range_end:; response_count:1; response_revision:337; }","duration":"108.584023ms","start":"2023-01-08T21:19:38.232Z","end":"2023-01-08T21:19:38.341Z","steps":["trace[246398554] 'agreement among raft nodes before linearized reading'  (duration: 108.494697ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:19:38.476Z","caller":"traceutil/trace.go:171","msg":"trace[882092600] linearizableReadLoop","detail":"{readStateIndex:348; appliedIndex:348; }","duration":"129.175544ms","start":"2023-01-08T21:19:38.346Z","end":"2023-01-08T21:19:38.475Z","steps":["trace[882092600] 'read index received'  (duration: 129.166063ms)","trace[882092600] 'applied index is now lower than readState.Index'  (duration: 8.426µs)"],"step_count":2}
	{"level":"warn","ts":"2023-01-08T21:19:38.558Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"212.073643ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2023-01-08T21:19:38.558Z","caller":"traceutil/trace.go:171","msg":"trace[1428972422] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:338; }","duration":"212.170328ms","start":"2023-01-08T21:19:38.346Z","end":"2023-01-08T21:19:38.558Z","steps":["trace[1428972422] 'agreement among raft nodes before linearized reading'  (duration: 129.289591ms)","trace[1428972422] 'range keys from in-memory index tree'  (duration: 82.642207ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-08T21:19:38.558Z","caller":"traceutil/trace.go:171","msg":"trace[1363471911] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"211.643128ms","start":"2023-01-08T21:19:38.347Z","end":"2023-01-08T21:19:38.558Z","steps":["trace[1363471911] 'process raft request'  (duration: 128.755978ms)","trace[1363471911] 'compare'  (duration: 82.707271ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-08T21:19:38.559Z","caller":"traceutil/trace.go:171","msg":"trace[468553146] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"210.55068ms","start":"2023-01-08T21:19:38.348Z","end":"2023-01-08T21:19:38.559Z","steps":["trace[468553146] 'process raft request'  (duration: 210.431063ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:19:38.559Z","caller":"traceutil/trace.go:171","msg":"trace[1822188889] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"206.78465ms","start":"2023-01-08T21:19:38.352Z","end":"2023-01-08T21:19:38.559Z","steps":["trace[1822188889] 'process raft request'  (duration: 206.647266ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-08T21:19:38.560Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"151.251419ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2023-01-08T21:19:38.560Z","caller":"traceutil/trace.go:171","msg":"trace[1349132424] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:341; }","duration":"151.305603ms","start":"2023-01-08T21:19:38.409Z","end":"2023-01-08T21:19:38.560Z","steps":["trace[1349132424] 'agreement among raft nodes before linearized reading'  (duration: 151.227745ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-08T21:19:38.803Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"215.509358ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-565d847f94\" ","response":"range_response_count:1 size:3685"}
	{"level":"info","ts":"2023-01-08T21:19:38.803Z","caller":"traceutil/trace.go:171","msg":"trace[390813499] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-565d847f94; range_end:; response_count:1; response_revision:349; }","duration":"215.601636ms","start":"2023-01-08T21:19:38.587Z","end":"2023-01-08T21:19:38.803Z","steps":["trace[390813499] 'agreement among raft nodes before linearized reading'  (duration: 122.1355ms)","trace[390813499] 'range keys from in-memory index tree'  (duration: 93.336213ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-08T21:19:38.803Z","caller":"traceutil/trace.go:171","msg":"trace[1873919722] transaction","detail":"{read_only:false; response_revision:351; number_of_response:1; }","duration":"179.476225ms","start":"2023-01-08T21:19:38.623Z","end":"2023-01-08T21:19:38.803Z","steps":["trace[1873919722] 'process raft request'  (duration: 179.422915ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:19:38.803Z","caller":"traceutil/trace.go:171","msg":"trace[475140130] transaction","detail":"{read_only:false; response_revision:350; number_of_response:1; }","duration":"180.612361ms","start":"2023-01-08T21:19:38.622Z","end":"2023-01-08T21:19:38.803Z","steps":["trace[475140130] 'process raft request'  (duration: 87.149705ms)","trace[475140130] 'compare'  (duration: 93.28857ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-08T21:19:38.803Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"203.972372ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-01-08T21:19:38.803Z","caller":"traceutil/trace.go:171","msg":"trace[817869447] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:349; }","duration":"204.287434ms","start":"2023-01-08T21:19:38.599Z","end":"2023-01-08T21:19:38.803Z","steps":["trace[817869447] 'agreement among raft nodes before linearized reading'  (duration: 110.561756ms)","trace[817869447] 'range keys from in-memory index tree'  (duration: 93.388837ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-08T21:19:38.803Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"217.754398ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-211859\" ","response":"range_response_count:1 size:3712"}
	{"level":"info","ts":"2023-01-08T21:19:38.803Z","caller":"traceutil/trace.go:171","msg":"trace[2130402905] range","detail":"{range_begin:/registry/minions/no-preload-211859; range_end:; response_count:1; response_revision:349; }","duration":"218.239404ms","start":"2023-01-08T21:19:38.585Z","end":"2023-01-08T21:19:38.803Z","steps":["trace[2130402905] 'agreement among raft nodes before linearized reading'  (duration: 124.368132ms)","trace[2130402905] 'range keys from in-memory index tree'  (duration: 93.348555ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-08T21:19:38.809Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"189.506275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2023-01-08T21:19:38.809Z","caller":"traceutil/trace.go:171","msg":"trace[456320770] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:351; }","duration":"189.575792ms","start":"2023-01-08T21:19:38.619Z","end":"2023-01-08T21:19:38.809Z","steps":["trace[456320770] 'agreement among raft nodes before linearized reading'  (duration: 189.459866ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:19:39.011Z","caller":"traceutil/trace.go:171","msg":"trace[457235364] transaction","detail":"{read_only:false; response_revision:358; number_of_response:1; }","duration":"163.339053ms","start":"2023-01-08T21:19:38.847Z","end":"2023-01-08T21:19:39.011Z","steps":["trace[457235364] 'process raft request'  (duration: 74.231725ms)","trace[457235364] 'compare'  (duration: 88.976454ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-08T21:29:19.817Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":448}
	{"level":"info","ts":"2023-01-08T21:29:19.818Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":448,"took":"440.01µs"}
	
	* 
	* ==> kernel <==
	*  21:31:44 up  1:14,  0 users,  load average: 0.61, 0.67, 1.20
	Linux no-preload-211859 5.15.0-1025-gcp #32~20.04.2-Ubuntu SMP Tue Nov 29 08:31:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [e5292d3c9357ae424b2211a5576a5c0d1dc2148f92dbb693b2b173d02a43a659] <==
	* I0108 21:19:21.742510       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0108 21:19:21.809931       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0108 21:19:21.821241       1 controller.go:616] quota admission added evaluator for: namespaces
	I0108 21:19:21.831952       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0108 21:19:21.832425       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0108 21:19:21.832531       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:19:21.833141       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:19:21.839440       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:19:22.504449       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 21:19:22.736509       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0108 21:19:22.739503       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0108 21:19:22.739524       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 21:19:23.047398       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:19:23.077104       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 21:19:23.130858       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0108 21:19:23.135237       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0108 21:19:23.136245       1 controller.go:616] quota admission added evaluator for: endpoints
	I0108 21:19:23.139902       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 21:19:23.748461       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0108 21:19:24.353261       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0108 21:19:24.359899       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0108 21:19:24.366666       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0108 21:19:24.431848       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:19:37.979307       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0108 21:19:37.979567       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [1c6e8899fc497e069140e33049c350dcdfe8bcafcaaba19c4666917216092e42] <==
	* I0108 21:19:37.147526       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I0108 21:19:37.147538       1 taint_manager.go:209] "Sending events to api server"
	W0108 21:19:37.147611       1 node_lifecycle_controller.go:1058] Missing timestamp for Node no-preload-211859. Assuming now as a timestamp.
	I0108 21:19:37.147654       1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0108 21:19:37.147730       1 event.go:294] "Event occurred" object="no-preload-211859" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node no-preload-211859 event: Registered Node no-preload-211859 in Controller"
	I0108 21:19:37.148367       1 shared_informer.go:262] Caches are synced for GC
	I0108 21:19:37.179634       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 21:19:37.190768       1 shared_informer.go:262] Caches are synced for stateful set
	I0108 21:19:37.198281       1 shared_informer.go:262] Caches are synced for expand
	I0108 21:19:37.198284       1 shared_informer.go:262] Caches are synced for cronjob
	I0108 21:19:37.203901       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 21:19:37.215008       1 shared_informer.go:262] Caches are synced for ephemeral
	I0108 21:19:37.240454       1 shared_informer.go:262] Caches are synced for attach detach
	I0108 21:19:37.244863       1 shared_informer.go:262] Caches are synced for PVC protection
	I0108 21:19:37.249282       1 shared_informer.go:262] Caches are synced for persistent volume
	I0108 21:19:37.623965       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 21:19:37.698165       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 21:19:37.698192       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0108 21:19:38.152229       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I0108 21:19:38.156739       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vh4hl"
	I0108 21:19:38.232916       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zb6wz"
	I0108 21:19:38.562867       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-6zc6h"
	I0108 21:19:38.563087       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I0108 21:19:38.584331       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-jw8vf"
	I0108 21:19:38.832932       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-6zc6h"
	
	* 
	* ==> kube-proxy [640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6] <==
	* I0108 21:19:39.345755       1 node.go:163] Successfully retrieved node IP: 192.168.85.2
	I0108 21:19:39.345825       1 server_others.go:138] "Detected node IP" address="192.168.85.2"
	I0108 21:19:39.345855       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0108 21:19:39.365639       1 server_others.go:206] "Using iptables Proxier"
	I0108 21:19:39.365673       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0108 21:19:39.365686       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0108 21:19:39.365706       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0108 21:19:39.365730       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:19:39.365898       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:19:39.366196       1 server.go:661] "Version info" version="v1.25.3"
	I0108 21:19:39.366220       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:19:39.366849       1 config.go:444] "Starting node config controller"
	I0108 21:19:39.366868       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0108 21:19:39.367048       1 config.go:317] "Starting service config controller"
	I0108 21:19:39.367072       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0108 21:19:39.367244       1 config.go:226] "Starting endpoint slice config controller"
	I0108 21:19:39.367262       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0108 21:19:39.467069       1 shared_informer.go:262] Caches are synced for node config
	I0108 21:19:39.467273       1 shared_informer.go:262] Caches are synced for service config
	I0108 21:19:39.467307       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [7b61203838e946e52bb257036892e21a8137d6b02ae6e307cba917eba43045f1] <==
	* W0108 21:19:21.827015       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:19:21.827283       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 21:19:21.827006       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:19:21.827309       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 21:19:21.827107       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:19:21.827324       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:19:21.827566       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:19:21.827589       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:19:22.668548       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:19:22.668587       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:19:22.676570       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:19:22.676605       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 21:19:22.742158       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:19:22.742193       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:19:22.795464       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:19:22.795534       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:19:22.816566       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:19:22.816605       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 21:19:22.836431       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:19:22.836467       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 21:19:22.885562       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:19:22.885594       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:19:22.897861       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:19:22.897897       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0108 21:19:25.223767       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:19:01 UTC, end at Sun 2023-01-08 21:31:44 UTC. --
	Jan 08 21:30:24 no-preload-211859 kubelet[1743]: E0108 21:30:24.815586    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:30:29 no-preload-211859 kubelet[1743]: E0108 21:30:29.816810    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:30:34 no-preload-211859 kubelet[1743]: E0108 21:30:34.817759    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:30:39 no-preload-211859 kubelet[1743]: E0108 21:30:39.819334    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:30:44 no-preload-211859 kubelet[1743]: E0108 21:30:44.820769    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:30:49 no-preload-211859 kubelet[1743]: E0108 21:30:49.821801    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:30:54 no-preload-211859 kubelet[1743]: E0108 21:30:54.823236    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:30:59 no-preload-211859 kubelet[1743]: E0108 21:30:59.824605    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:04 no-preload-211859 kubelet[1743]: I0108 21:31:04.796941    1743 scope.go:115] "RemoveContainer" containerID="ee5c26b5be6693eba12509c2b026b6926fe69201676dd458a85e373d53fbbd42"
	Jan 08 21:31:04 no-preload-211859 kubelet[1743]: I0108 21:31:04.797249    1743 scope.go:115] "RemoveContainer" containerID="da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c"
	Jan 08 21:31:04 no-preload-211859 kubelet[1743]: E0108 21:31:04.797591    1743 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-vh4hl_kube-system(c002c329-15ad-4066-8f90-bee3d9d18431)\"" pod="kube-system/kindnet-vh4hl" podUID=c002c329-15ad-4066-8f90-bee3d9d18431
	Jan 08 21:31:04 no-preload-211859 kubelet[1743]: E0108 21:31:04.825269    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:09 no-preload-211859 kubelet[1743]: E0108 21:31:09.827020    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:14 no-preload-211859 kubelet[1743]: E0108 21:31:14.828207    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:16 no-preload-211859 kubelet[1743]: I0108 21:31:16.516981    1743 scope.go:115] "RemoveContainer" containerID="da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c"
	Jan 08 21:31:16 no-preload-211859 kubelet[1743]: E0108 21:31:16.517293    1743 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-vh4hl_kube-system(c002c329-15ad-4066-8f90-bee3d9d18431)\"" pod="kube-system/kindnet-vh4hl" podUID=c002c329-15ad-4066-8f90-bee3d9d18431
	Jan 08 21:31:19 no-preload-211859 kubelet[1743]: E0108 21:31:19.829102    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:24 no-preload-211859 kubelet[1743]: E0108 21:31:24.830716    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:27 no-preload-211859 kubelet[1743]: I0108 21:31:27.517205    1743 scope.go:115] "RemoveContainer" containerID="da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c"
	Jan 08 21:31:27 no-preload-211859 kubelet[1743]: E0108 21:31:27.517486    1743 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-vh4hl_kube-system(c002c329-15ad-4066-8f90-bee3d9d18431)\"" pod="kube-system/kindnet-vh4hl" podUID=c002c329-15ad-4066-8f90-bee3d9d18431
	Jan 08 21:31:29 no-preload-211859 kubelet[1743]: E0108 21:31:29.832229    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:34 no-preload-211859 kubelet[1743]: E0108 21:31:34.833824    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:39 no-preload-211859 kubelet[1743]: E0108 21:31:39.834824    1743 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:42 no-preload-211859 kubelet[1743]: I0108 21:31:42.517006    1743 scope.go:115] "RemoveContainer" containerID="da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c"
	Jan 08 21:31:42 no-preload-211859 kubelet[1743]: E0108 21:31:42.517275    1743 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-vh4hl_kube-system(c002c329-15ad-4066-8f90-bee3d9d18431)\"" pod="kube-system/kindnet-vh4hl" podUID=c002c329-15ad-4066-8f90-bee3d9d18431
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-211859 -n no-preload-211859
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-211859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-565d847f94-jw8vf storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-211859 describe pod busybox coredns-565d847f94-jw8vf storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-211859 describe pod busybox coredns-565d847f94-jw8vf storage-provisioner: exit status 1 (69.874175ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9txc5 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-9txc5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m50s (x2 over 8m5s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-jw8vf" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-211859 describe pod busybox coredns-565d847f94-jw8vf storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (484.49s)
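The kubelet log above shows kindnet-cni crash-looping while the CNI plugin never initializes, which keeps the node NotReady and leaves the busybox pod unschedulable. A minimal follow-up sketch, assuming the no-preload-211859 kubeconfig context is still reachable and reusing the pod and container names from the kubelet log (they will differ on a fresh run):

	# list kube-system pods to confirm kindnet is the one crash-looping
	kubectl --context no-preload-211859 -n kube-system get pods -o wide
	# dump the previous (crashed) kindnet-cni container log
	kubectl --context no-preload-211859 -n kube-system logs kindnet-vh4hl -c kindnet-cni --previous
	# check the container runtime's view from inside the node (profile name assumed to match the -p flag)
	minikube -p no-preload-211859 ssh -- sudo crictl ps -a

These are plain kubectl/crictl/minikube commands for reproducing the diagnosis by hand; they are not steps the test harness itself ran.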

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (484.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-211952 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [30f6f999-e041-45e7-9d60-f78e08279ce2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
E0108 21:24:58.613366   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
E0108 21:25:15.378550   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 21:25:37.884042   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:25:50.301951   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
E0108 21:25:56.111968   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: ***** TestStartStop/group/default-k8s-diff-port/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-211952 -n default-k8s-diff-port-211952
start_stop_delete_test.go:196: TestStartStop/group/default-k8s-diff-port/serial/DeployApp: showing logs for failed pods as of 2023-01-08 21:32:42.427513952 +0000 UTC m=+3924.570686243
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-211952 describe po busybox -n default
start_stop_delete_test.go:196: (dbg) kubectl --context default-k8s-diff-port-211952 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
busybox:
Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
Port:       <none>
Host Port:  <none>
Command:
sleep
3600
Environment:  <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vrnzp (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-vrnzp:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age                 From               Message
----     ------            ----                ----               -------
Warning  FailedScheduling  2m45s (x2 over 8m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-211952 logs busybox -n default
start_stop_delete_test.go:196: (dbg) kubectl --context default-k8s-diff-port-211952 logs busybox -n default:
start_stop_delete_test.go:196: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
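The timeout matches the FailedScheduling events above: the single node still carries the node.kubernetes.io/not-ready taint, so the pod can never be placed. A quick way to confirm the node condition and taints, assuming the default-k8s-diff-port-211952 context is still usable and that the node name equals the profile name (the usual case for a single-node minikube cluster):

	# node readiness at a glance
	kubectl --context default-k8s-diff-port-211952 get nodes -o wide
	# taints currently applied to the node (node name is an assumption here)
	kubectl --context default-k8s-diff-port-211952 describe node default-k8s-diff-port-211952 | grep -i -A 4 taints

Until the CNI comes up and the not-ready taint is cleared, the FailedScheduling events above are the expected result.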
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-211952
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-211952:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a",
	        "Created": "2023-01-08T21:20:01.150415833Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246803,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:20:01.544064591Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/hostname",
	        "HostsPath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/hosts",
	        "LogPath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a-json.log",
	        "Name": "/default-k8s-diff-port-211952",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-211952:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-211952",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4-init/diff:/var/lib/docker/overlay2/08b33854295261bb38a0074055b3dd7f2f489d35ea70ee5a594182ad1dc0fd2b/diff:/var/lib/docker/overlay2/5abe192e44c49d2a25560e4c1ae4a6ba5ec58100f8e1869d306a28b614caf414/diff:/var/lib/docker/overlay2/156a32df2dac676d7cd6558d5ad13be5054eb2fddc8d1b10137bf34bd6b1ddf7/diff:/var/lib/docker/overlay2/f3a5b8630133b5cdc48bbdb98f666efad221386dbd08f5fc096c7ba6d3f31667/diff:/var/lib/docker/overlay2/4d4ccd218e479874818175ec6c4e28f06f0084eed74b9a83af5a96e8bf41e6c7/diff:/var/lib/docker/overlay2/ab8a3247b29cb45a0024dbc2477a50b042f03e39064c09e23d21a2b5f331013c/diff:/var/lib/docker/overlay2/cd6161fa5d366910a9eb537894f374c49b0ba256c81a9c1ad3d4f4786d730668/diff:/var/lib/docker/overlay2/a26d88db409214dee405ed385f40b0d58e37ef670c26e65c1132eb27e3bb984d/diff:/var/lib/docker/overlay2/f76d833c20bf46fe7aec412f931039cdeb58b129552b97228c55fada8b05e63a/diff:/var/lib/docker/overlay2/584beb
80220462b8bc5a6c651735d55544ebef775c71b1a91b92235049a7e9da/diff:/var/lib/docker/overlay2/5209ebd508837f05291d5f22eff307db47f9c94691dd096cdc6ff6446db45461/diff:/var/lib/docker/overlay2/968fc302c0737742d4698bb0026b5cfc50c4b2934930db492c97b346fca2bd67/diff:/var/lib/docker/overlay2/b469d09ff82a75fb0f6dd04264a04b21d3d113c8577afc6dbc6a7dafe4022179/diff:/var/lib/docker/overlay2/2869116cf2f02bdc6170bde623a11022eba8e5c0f84630c9a0fd0a86483d0ed2/diff:/var/lib/docker/overlay2/ad4794ae637b0ce306963007b35562110fba050d9e3cd6196830448c749d7cc6/diff:/var/lib/docker/overlay2/a53b191897dfffb12ad2948cad16d668a0f7c2be5e3e6fd2556fe22d8d608879/diff:/var/lib/docker/overlay2/bbc7772821406640e29f442eb8752d92d7b72e8314035d6cf94255976b90642e/diff:/var/lib/docker/overlay2/5e3ac5600af83e3f6f756895fbd7a66fa97e799f659541508353078d057fd89b/diff:/var/lib/docker/overlay2/79c23e7084aa1de637cb09a7c07679d6c8d4ff6c6e7384f7f15cb549f8186fd6/diff:/var/lib/docker/overlay2/86ef1d4759376f1edb9cf43004ae59ba61200d79acea357b872de5247071531a/diff:/var/lib/d
ocker/overlay2/5c948a7514e051c2a3b4600944d475a674404c1f58b1bd7fd4d7b764be35c3e7/diff:/var/lib/docker/overlay2/71b25d81823337d776e0202f623b2f822a2ebba0e8f935e3211da145bbdac22d/diff:/var/lib/docker/overlay2/4261f26f8c837e7f8f07ba0cd2aca7274aa96561ac0bd104af28d44c15678f8d/diff:/var/lib/docker/overlay2/f2a71b21156c3ad367bb9fe8bde3e75df6681f9e505d34d8e1b635e50aa0e29b/diff:/var/lib/docker/overlay2/9ef1fde443b60bf4cda7ef2a51adcacef4383280f30eb6ee14e23959a5395e18/diff:/var/lib/docker/overlay2/aa727d5a432452b3e1ed2fc994db0bf2d5a47af89bc768a1e2b633baa8319025/diff:/var/lib/docker/overlay2/dc9499b738334b5626de1363086d853c4e6c2c5b2d51f58e8e1ed8346cc39f45/diff:/var/lib/docker/overlay2/03cc962675627b454b9857b5ffcac439143653d35cd42ad886571fbecd955c6c/diff:/var/lib/docker/overlay2/32064543ecb8bae609fda6b220233d7b73f4e601bd320e1d389451fb120c2459/diff:/var/lib/docker/overlay2/a0c00493cc3fdbdaa87f61faf83c8fcedb9ee4d5b24ecc35eceba0429b6dcc88/diff:/var/lib/docker/overlay2/18cbe77f8dab8144889e1deb46a8795fd207afbdfd9ea3fdeb526b72d7e
4a4cb/diff:/var/lib/docker/overlay2/dca4f5d0bca28bc75fa764afd39d07871f051c2d362fc2850ca957b975f75555/diff:/var/lib/docker/overlay2/cd071d8f661b9ff9c6d861fce06842188fda3d307062488b93944a97883f4a67/diff:/var/lib/docker/overlay2/5a0c166a1e11d0bb9f7de4a882be960f70ac972e91b3ab8fd83fc6eec2bf1d5e/diff:/var/lib/docker/overlay2/43eb3ae44d3bf3e6cbf5a66accacb1247ef210213c56a67097949d06794f4bbc/diff:/var/lib/docker/overlay2/b60cb5b525f4aed6b27cf2e451596e134d1185254f3798d5a91357814552e273/diff:/var/lib/docker/overlay2/e5c8c85e78fedb1981436f368ad70bdfe38d240dddceac3d103e1d09bc252cef/diff:/var/lib/docker/overlay2/7bf66ab7d17eda3c6760fe43588afa33bb56279f5a7202de9cc98b4711f0da5b/diff:/var/lib/docker/overlay2/d5dbd6e8a20b8681e7b55b57c382c8c9029da8496cc9e0f4d1eb3b224583535e/diff:/var/lib/docker/overlay2/de3ca9b514aa5071c0fd39d0d80fa080bce6e82e872747ec916ef7bf379c0c2a/diff:/var/lib/docker/overlay2/80bff03df7f2a6639d98a2c99d1cc55a36dc9110501452bfd7bce3bce496541e/diff:/var/lib/docker/overlay2/471bea5a4732f210c3fcf87a02643b615a96f6
39d01ac2c3e71f7886b23733a5/diff:/var/lib/docker/overlay2/29dd84326e3ef52c85008a60a7ff2225fd18f1d51ec3e60f143a0d93e442469d/diff:/var/lib/docker/overlay2/05d035eba3e40a396836d9917064869afd7ec70920f526f8903dc8f5a9f2dce3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-211952",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-211952/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-211952",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-211952",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-211952",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c2bf33dbb62611d9560108b1c0a529546771fed3ac5d99ff62eef897f847b173",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33023"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c2bf33dbb626",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-211952": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "553ec1d733bb",
	                        "default-k8s-diff-port-211952"
	                    ],
	                    "NetworkID": "dac77270e17703c586bb819b54d2f7262cc084b9a2efd9432712b1970a60294f",
	                    "EndpointID": "3d04b80dad9440ee7c222e1d09648b2670e8c59dcc8578b2d8550cd138b1734d",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-211952 -n default-k8s-diff-port-211952
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-211952 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p embed-certs-211950                     | embed-certs-211950     | jenkins | v1.28.0 | 08 Jan 23 21:21 UTC | 08 Jan 23 21:21 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                        |         |         |                     |                     |
	| start   | -p embed-certs-211950                                      | embed-certs-211950     | jenkins | v1.28.0 | 08 Jan 23 21:21 UTC | 08 Jan 23 21:26 UTC |
	|         | --memory=2200                                              |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                              |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                        |         |         |                     |                     |
	| ssh     | -p embed-certs-211950 sudo                                 | embed-certs-211950     | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | crictl images -o json                                      |                        |         |         |                     |                     |
	| pause   | -p embed-certs-211950                                      | embed-certs-211950     | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | --alsologtostderr -v=1                                     |                        |         |         |                     |                     |
	| unpause | -p embed-certs-211950                                      | embed-certs-211950     | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | --alsologtostderr -v=1                                     |                        |         |         |                     |                     |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950     | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950     | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                        |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                        |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                        |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-212639                 | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                        |         |         |                     |                     |
	| stop    | -p newest-cni-212639                                       | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=3                                     |                        |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-212639                      | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                        |         |         |                     |                     |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                        |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                        |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                        |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                        |         |         |                     |                     |
	| ssh     | -p newest-cni-212639 sudo                                  | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | crictl images -o json                                      |                        |         |         |                     |                     |
	| pause   | -p newest-cni-212639                                       | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=1                                     |                        |         |         |                     |                     |
	| unpause | -p newest-cni-212639                                       | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	|         | --alsologtostderr -v=1                                     |                        |         |         |                     |                     |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| addons  | enable metrics-server -p old-k8s-version-211828            | old-k8s-version-211828 | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-211828                                  | old-k8s-version-211828 | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --alsologtostderr -v=3                                     |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-211828                 | old-k8s-version-211828 | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-211828                                  | old-k8s-version-211828 | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC |                     |
	|         | --memory=2200                                              |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                        |         |         |                     |                     |
	|         | --kvm-network=default                                      |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                        |         |         |                     |                     |
	|         | --keep-context=false                                       |                        |         |         |                     |                     |
	|         | --driver=docker                                            |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-211859                 | no-preload-211859      | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                        |         |         |                     |                     |
	| stop    | -p no-preload-211859                                       | no-preload-211859      | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --alsologtostderr -v=3                                     |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-211859                      | no-preload-211859      | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                        |         |         |                     |                     |
	| start   | -p no-preload-211859                                       | no-preload-211859      | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC |                     |
	|         | --memory=2200                                              |                        |         |         |                     |                     |
	|         | --alsologtostderr                                          |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                        |         |         |                     |                     |
	|         | --driver=docker                                            |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                        |         |         |                     |                     |
	|---------|------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 21:31:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:31:47.372197  278286 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:31:47.372417  278286 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:31:47.372427  278286 out.go:309] Setting ErrFile to fd 2...
	I0108 21:31:47.372431  278286 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:31:47.372606  278286 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:31:47.373198  278286 out.go:303] Setting JSON to false
	I0108 21:31:47.374571  278286 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4457,"bootTime":1673209051,"procs":559,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:31:47.374633  278286 start.go:135] virtualization: kvm guest
	I0108 21:31:47.377099  278286 out.go:177] * [no-preload-211859] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:31:47.378501  278286 notify.go:220] Checking for updates...
	I0108 21:31:47.380024  278286 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:31:47.381860  278286 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:31:47.383393  278286 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:31:47.384839  278286 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:31:47.386173  278286 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:31:47.387871  278286 config.go:180] Loaded profile config "no-preload-211859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:31:47.388286  278286 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:31:47.418011  278286 docker.go:137] docker version: linux-20.10.22
	I0108 21:31:47.418111  278286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:31:47.514291  278286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:31:47.43856006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:31:47.514399  278286 docker.go:254] overlay module found
	I0108 21:31:47.517594  278286 out.go:177] * Using the docker driver based on existing profile
	I0108 21:31:47.519152  278286 start.go:294] selected driver: docker
	I0108 21:31:47.519173  278286 start.go:838] validating driver "docker" against &{Name:no-preload-211859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:no-preload-211859 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:31:47.519311  278286 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:31:47.520298  278286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:31:47.620459  278286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:31:47.542696624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:31:47.620698  278286 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:31:47.620723  278286 cni.go:95] Creating CNI manager for ""
	I0108 21:31:47.620731  278286 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:31:47.620745  278286 start_flags.go:317] config:
	{Name:no-preload-211859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:no-preload-211859 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:31:47.623862  278286 out.go:177] * Starting control plane node no-preload-211859 in cluster no-preload-211859
	I0108 21:31:47.625336  278286 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:31:47.626861  278286 out.go:177] * Pulling base image ...
	I0108 21:31:47.628400  278286 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:31:47.628429  278286 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:31:47.628561  278286 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/config.json ...
	I0108 21:31:47.628618  278286 cache.go:107] acquiring lock: {Name:mka4eae081deb9dc030a8e6d208cdbfc375fedd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628650  278286 cache.go:107] acquiring lock: {Name:mk5f6bff7f6f0a24f6225496f42d8e8e28b27999 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628705  278286 cache.go:107] acquiring lock: {Name:mk5f9a0ef25a028cc0da95c581faa4f8582f8133 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628775  278286 cache.go:107] acquiring lock: {Name:mk240cd96639812e2ee7ab4caa38c9f49d9f4169 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628774  278286 cache.go:107] acquiring lock: {Name:mk09e8a53a311c6d58c16c85cb6a7a373e3c68b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628784  278286 cache.go:107] acquiring lock: {Name:mk1ba37dc36f668cc1aa7c0cabe840314426c4d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628812  278286 cache.go:107] acquiring lock: {Name:mka15fcca44dc28e79d1a5c07b3e2caf71bae5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628787  278286 cache.go:107] acquiring lock: {Name:mkcc5294a2af912a919e5a940c540341ff897a1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628907  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 exists
	I0108 21:31:47.628928  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 exists
	I0108 21:31:47.628933  278286 cache.go:96] cache image "registry.k8s.io/pause:3.8" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8" took 121.663µs
	I0108 21:31:47.628938  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 exists
	I0108 21:31:47.628946  278286 cache.go:80] save to tar file registry.k8s.io/pause:3.8 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 succeeded
	I0108 21:31:47.628906  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 21:31:47.628955  278286 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.25.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3" took 317.677µs
	I0108 21:31:47.628958  278286 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.25.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3" took 270.333µs
	I0108 21:31:47.628967  278286 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.25.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 succeeded
	I0108 21:31:47.628967  278286 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.25.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 succeeded
	I0108 21:31:47.628967  278286 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 355.563µs
	I0108 21:31:47.628976  278286 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 21:31:47.628993  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 exists
	I0108 21:31:47.629015  278286 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.25.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3" took 352.452µs
	I0108 21:31:47.629027  278286 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.25.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 succeeded
	I0108 21:31:47.629038  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 exists
	I0108 21:31:47.629049  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 exists
	I0108 21:31:47.629056  278286 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.25.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3" took 323.146µs
	I0108 21:31:47.629064  278286 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.25.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 succeeded
	I0108 21:31:47.629070  278286 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3" took 297.994µs
	I0108 21:31:47.629088  278286 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 succeeded
	I0108 21:31:47.629051  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 exists
	I0108 21:31:47.629102  278286 cache.go:96] cache image "registry.k8s.io/etcd:3.5.4-0" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0" took 330.46µs
	I0108 21:31:47.629116  278286 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.4-0 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 succeeded
	I0108 21:31:47.629122  278286 cache.go:87] Successfully saved all images to host disk.
	I0108 21:31:47.652827  278286 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:31:47.652851  278286 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:31:47.652870  278286 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:31:47.652900  278286 start.go:364] acquiring machines lock for no-preload-211859: {Name:mk421f625ba7c0f468447c7930aeee12b4ccfc5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.653003  278286 start.go:368] acquired machines lock for "no-preload-211859" in 85.079µs
	I0108 21:31:47.653019  278286 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:31:47.653023  278286 fix.go:55] fixHost starting: 
	I0108 21:31:47.653231  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:31:47.676820  278286 fix.go:103] recreateIfNeeded on no-preload-211859: state=Stopped err=<nil>
	W0108 21:31:47.676850  278286 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:31:47.679056  278286 out.go:177] * Restarting existing docker container for "no-preload-211859" ...
	I0108 21:31:44.891244  274657 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0108 21:31:46.398419  274657 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0108 21:31:47.476145  274657 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0108 21:31:49.350616  274657 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0108 21:31:47.680453  278286 cli_runner.go:164] Run: docker start no-preload-211859
	I0108 21:31:48.055774  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:31:48.081772  278286 kic.go:415] container "no-preload-211859" state is running.
	I0108 21:31:48.082176  278286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-211859
	I0108 21:31:48.106752  278286 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/config.json ...
	I0108 21:31:48.106996  278286 machine.go:88] provisioning docker machine ...
	I0108 21:31:48.107026  278286 ubuntu.go:169] provisioning hostname "no-preload-211859"
	I0108 21:31:48.107073  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:48.132199  278286 main.go:134] libmachine: Using SSH client type: native
	I0108 21:31:48.132389  278286 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33052 <nil> <nil>}
	I0108 21:31:48.132411  278286 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-211859 && echo "no-preload-211859" | sudo tee /etc/hostname
	I0108 21:31:48.133075  278286 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53914->127.0.0.1:33052: read: connection reset by peer
	I0108 21:31:51.259690  278286 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-211859
	
	I0108 21:31:51.259765  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:51.287159  278286 main.go:134] libmachine: Using SSH client type: native
	I0108 21:31:51.287325  278286 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33052 <nil> <nil>}
	I0108 21:31:51.287351  278286 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-211859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-211859/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-211859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:31:51.403424  278286 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:31:51.403455  278286 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:31:51.403534  278286 ubuntu.go:177] setting up certificates
	I0108 21:31:51.403545  278286 provision.go:83] configureAuth start
	I0108 21:31:51.403600  278286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-211859
	I0108 21:31:51.427972  278286 provision.go:138] copyHostCerts
	I0108 21:31:51.428030  278286 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:31:51.428040  278286 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:31:51.428108  278286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:31:51.428200  278286 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:31:51.428212  278286 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:31:51.428241  278286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:31:51.428291  278286 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:31:51.428298  278286 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:31:51.428324  278286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:31:51.428366  278286 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.no-preload-211859 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-211859]
	I0108 21:31:51.573024  278286 provision.go:172] copyRemoteCerts
	I0108 21:31:51.573080  278286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:31:51.573115  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:51.597019  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:31:51.682658  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:31:51.699465  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 21:31:51.716152  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:31:51.732857  278286 provision.go:86] duration metric: configureAuth took 329.295378ms
	I0108 21:31:51.732886  278286 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:31:51.733029  278286 config.go:180] Loaded profile config "no-preload-211859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:31:51.733040  278286 machine.go:91] provisioned docker machine in 3.626026428s
	I0108 21:31:51.733046  278286 start.go:300] post-start starting for "no-preload-211859" (driver="docker")
	I0108 21:31:51.733052  278286 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:31:51.733093  278286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:31:51.733143  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:51.758975  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:31:51.842569  278286 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:31:51.845292  278286 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:31:51.845322  278286 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:31:51.845336  278286 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:31:51.845349  278286 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:31:51.845361  278286 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:31:51.845402  278286 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:31:51.845479  278286 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:31:51.845561  278286 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:31:51.851717  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:31:51.868480  278286 start.go:303] post-start completed in 135.417503ms
	I0108 21:31:51.868534  278286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:31:51.868562  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:51.892345  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:31:51.979939  278286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:31:51.983706  278286 fix.go:57] fixHost completed within 4.330677273s
	I0108 21:31:51.983729  278286 start.go:83] releasing machines lock for "no-preload-211859", held for 4.33071417s
	I0108 21:31:51.983817  278286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-211859
	I0108 21:31:52.008250  278286 ssh_runner.go:195] Run: cat /version.json
	I0108 21:31:52.008306  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:52.008345  278286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:31:52.008415  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:52.036127  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:31:52.036559  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:31:52.148615  278286 ssh_runner.go:195] Run: systemctl --version
	I0108 21:31:52.152487  278286 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:31:52.163721  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:31:52.173278  278286 docker.go:189] disabling docker service ...
	I0108 21:31:52.173325  278286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:31:52.183249  278286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:31:52.192257  278286 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:31:52.270587  278286 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:31:52.341138  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:31:52.350264  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:31:52.362467  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:31:52.370150  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:31:51.905434  274657 retry.go:31] will retry after 5.131623747s: kubelet not initialised
	I0108 21:31:52.377936  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:31:52.385834  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 21:31:52.393630  278286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:31:52.400059  278286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:31:52.406552  278286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:31:52.484476  278286 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:31:52.547909  278286 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:31:52.547978  278286 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:31:52.551296  278286 start.go:472] Will wait 60s for crictl version
	I0108 21:31:52.551354  278286 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:31:52.578456  278286 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:31:52Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 21:31:57.042459  274657 retry.go:31] will retry after 9.757045979s: kubelet not initialised
	I0108 21:32:03.626227  278286 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:32:03.650433  278286 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:32:03.650513  278286 ssh_runner.go:195] Run: containerd --version
	I0108 21:32:03.673911  278286 ssh_runner.go:195] Run: containerd --version
	I0108 21:32:03.701612  278286 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:32:03.703195  278286 cli_runner.go:164] Run: docker network inspect no-preload-211859 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:32:03.727853  278286 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0108 21:32:03.731414  278286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:32:03.741350  278286 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:32:03.741394  278286 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:32:03.765441  278286 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:32:03.765465  278286 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:32:03.765518  278286 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:32:03.789768  278286 cni.go:95] Creating CNI manager for ""
	I0108 21:32:03.789800  278286 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:32:03.789817  278286 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:32:03.789833  278286 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-211859 NodeName:no-preload-211859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:32:03.789993  278286 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-211859"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
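This generated kubeadm.yaml is what the restart path feeds back into kubeadm one phase at a time. A sketch of that replay, using the same commands that appear verbatim further down in this log:

    # restartCluster replay of the generated config (commands as logged below)
    sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml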
	I0108 21:32:03.790112  278286 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-211859 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:no-preload-211859 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
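The ExecStart above is rendered into a systemd drop-in rather than into the unit file itself; the scp lines just below write it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside /lib/systemd/system/kubelet.service. A quick way to confirm what the kubelet will actually be started with (sketch, run inside the node):

    # shows the kubelet unit plus all drop-ins, including 10-kubeadm.conf written below
    sudo systemctl cat kubelet
    # or inspect the drop-in directly
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf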
	I0108 21:32:03.790181  278286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:32:03.797254  278286 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:32:03.797327  278286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:32:03.804119  278286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (510 bytes)
	I0108 21:32:03.816978  278286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:32:03.830009  278286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2046 bytes)
	I0108 21:32:03.844130  278286 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:32:03.847152  278286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:32:03.856758  278286 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859 for IP: 192.168.85.2
	I0108 21:32:03.856858  278286 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:32:03.856896  278286 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:32:03.856956  278286 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/client.key
	I0108 21:32:03.857006  278286 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.key.43b9df8c
	I0108 21:32:03.857041  278286 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/proxy-client.key
	I0108 21:32:03.857131  278286 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:32:03.857160  278286 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:32:03.857173  278286 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:32:03.857196  278286 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:32:03.857224  278286 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:32:03.857244  278286 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:32:03.857279  278286 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:32:03.857853  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:32:03.877228  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:32:03.894973  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:32:03.912325  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 21:32:03.929477  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:32:03.946055  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:32:03.962744  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:32:03.979740  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:32:03.996409  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:32:04.012779  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:32:04.029143  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:32:04.045747  278286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:32:04.058662  278286 ssh_runner.go:195] Run: openssl version
	I0108 21:32:04.063563  278286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:32:04.070705  278286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:32:04.073719  278286 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:32:04.073767  278286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:32:04.078393  278286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:32:04.085125  278286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:32:04.092323  278286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:32:04.095231  278286 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:32:04.095276  278286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:32:04.099886  278286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:32:04.107081  278286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:32:04.114108  278286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:32:04.117029  278286 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:32:04.117072  278286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:32:04.121793  278286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
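The ln -fs commands above follow the standard OpenSSL subject-hash layout for /etc/ssl/certs: the symlink name is the value printed by `openssl x509 -hash` plus a ".0" suffix. A sketch using the minikubeCA values shown in this log:

    # -hash prints the subject hash the symlink is named after (b5213941 per this log)
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0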
	I0108 21:32:04.128357  278286 kubeadm.go:396] StartCluster: {Name:no-preload-211859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:no-preload-211859 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.
L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:32:04.128442  278286 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:32:04.128495  278286 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:32:04.152477  278286 cri.go:87] found id: "da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c"
	I0108 21:32:04.152498  278286 cri.go:87] found id: "640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6"
	I0108 21:32:04.152505  278286 cri.go:87] found id: "7b61203838e946e52bb257036892e21a8137d6b02ae6e307cba917eba43045f1"
	I0108 21:32:04.152511  278286 cri.go:87] found id: "e5292d3c9357ae424b2211a5576a5c0d1dc2148f92dbb693b2b173d02a43a659"
	I0108 21:32:04.152516  278286 cri.go:87] found id: "4777a2f6ea154d2e676477c6810e4eebb38bfca013c0990a8605fa7676818ecf"
	I0108 21:32:04.152523  278286 cri.go:87] found id: "1c6e8899fc497e069140e33049c350dcdfe8bcafcaaba19c4666917216092e42"
	I0108 21:32:04.152528  278286 cri.go:87] found id: ""
	I0108 21:32:04.152561  278286 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:32:04.163354  278286 cri.go:114] JSON = null
	W0108 21:32:04.163405  278286 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0108 21:32:04.163457  278286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:32:04.169935  278286 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:32:04.169956  278286 kubeadm.go:627] restartCluster start
	I0108 21:32:04.169988  278286 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:32:04.176496  278286 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:04.177334  278286 kubeconfig.go:135] verify returned: extract IP: "no-preload-211859" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:32:04.177774  278286 kubeconfig.go:146] "no-preload-211859" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:32:04.178473  278286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:32:04.179892  278286 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:32:04.186632  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:04.186676  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:04.195110  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:04.395513  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:04.395582  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:04.404046  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:04.595266  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:04.595346  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:04.603669  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:04.795951  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:04.796019  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:04.804763  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:04.996094  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:04.996191  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:05.004793  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:05.196080  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:05.196146  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:05.204564  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:05.395860  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:05.395951  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:05.404477  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:05.595811  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:05.595891  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:05.604562  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:05.795835  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:05.795898  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:05.804403  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:05.995694  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:05.995762  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:06.004274  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:06.195535  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:06.195616  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:06.204305  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:06.395611  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:06.395692  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:06.404197  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:06.595519  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:06.595606  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:06.604401  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:06.795696  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:06.795764  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:06.804957  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:06.995206  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:06.995292  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:07.004148  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:07.195361  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:07.195428  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:07.204056  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:07.204077  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:07.204110  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:07.212048  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:07.212079  278286 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0108 21:32:07.212087  278286 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:32:07.212099  278286 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:32:07.212145  278286 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:32:07.235576  278286 cri.go:87] found id: "da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c"
	I0108 21:32:07.235604  278286 cri.go:87] found id: "640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6"
	I0108 21:32:07.235616  278286 cri.go:87] found id: "7b61203838e946e52bb257036892e21a8137d6b02ae6e307cba917eba43045f1"
	I0108 21:32:07.235626  278286 cri.go:87] found id: "e5292d3c9357ae424b2211a5576a5c0d1dc2148f92dbb693b2b173d02a43a659"
	I0108 21:32:07.235636  278286 cri.go:87] found id: "4777a2f6ea154d2e676477c6810e4eebb38bfca013c0990a8605fa7676818ecf"
	I0108 21:32:07.235650  278286 cri.go:87] found id: "1c6e8899fc497e069140e33049c350dcdfe8bcafcaaba19c4666917216092e42"
	I0108 21:32:07.235665  278286 cri.go:87] found id: ""
	I0108 21:32:07.235675  278286 cri.go:232] Stopping containers: [da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c 640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6 7b61203838e946e52bb257036892e21a8137d6b02ae6e307cba917eba43045f1 e5292d3c9357ae424b2211a5576a5c0d1dc2148f92dbb693b2b173d02a43a659 4777a2f6ea154d2e676477c6810e4eebb38bfca013c0990a8605fa7676818ecf 1c6e8899fc497e069140e33049c350dcdfe8bcafcaaba19c4666917216092e42]
	I0108 21:32:07.235717  278286 ssh_runner.go:195] Run: which crictl
	I0108 21:32:07.238503  278286 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c 640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6 7b61203838e946e52bb257036892e21a8137d6b02ae6e307cba917eba43045f1 e5292d3c9357ae424b2211a5576a5c0d1dc2148f92dbb693b2b173d02a43a659 4777a2f6ea154d2e676477c6810e4eebb38bfca013c0990a8605fa7676818ecf 1c6e8899fc497e069140e33049c350dcdfe8bcafcaaba19c4666917216092e42
	I0108 21:32:07.262960  278286 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:32:07.272749  278286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:32:07.279614  278286 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan  8 21:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  8 21:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jan  8 21:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan  8 21:19 /etc/kubernetes/scheduler.conf
	
	I0108 21:32:07.279671  278286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 21:32:07.286115  278286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 21:32:07.292656  278286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 21:32:07.299126  278286 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:07.299194  278286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 21:32:07.305509  278286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 21:32:07.312247  278286 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:07.312297  278286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 21:32:07.318608  278286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:32:07.325306  278286 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:32:07.325326  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:32:07.369488  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:32:06.804972  274657 retry.go:31] will retry after 18.937774914s: kubelet not initialised
	I0108 21:32:08.118233  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:32:08.253244  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:32:08.303991  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:32:08.412623  278286 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:32:08.412743  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:32:08.921962  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:32:09.421918  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:32:09.434129  278286 api_server.go:71] duration metric: took 1.021506771s to wait for apiserver process to appear ...
	I0108 21:32:09.434161  278286 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:32:09.434173  278286 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0108 21:32:09.434545  278286 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0108 21:32:09.935273  278286 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0108 21:32:12.725708  278286 api_server.go:278] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:32:12.725738  278286 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:32:12.935144  278286 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0108 21:32:12.939566  278286 api_server.go:278] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:32:12.939597  278286 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:32:13.435040  278286 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0108 21:32:13.439568  278286 api_server.go:278] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:32:13.439591  278286 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:32:13.934877  278286 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0108 21:32:13.939903  278286 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0108 21:32:13.945633  278286 api_server.go:140] control plane version: v1.25.3
	I0108 21:32:13.945662  278286 api_server.go:130] duration metric: took 4.511494879s to wait for apiserver health ...
	I0108 21:32:13.945673  278286 cni.go:95] Creating CNI manager for ""
	I0108 21:32:13.945681  278286 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:32:13.948245  278286 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:32:13.949871  278286 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:32:13.953423  278286 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:32:13.953439  278286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:32:13.966338  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:32:14.826804  278286 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:32:14.833621  278286 system_pods.go:59] 9 kube-system pods found
	I0108 21:32:14.833651  278286 system_pods.go:61] "coredns-565d847f94-jw8vf" [273a87b0-0dde-4637-b287-732fde04519d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:32:14.833659  278286 system_pods.go:61] "etcd-no-preload-211859" [ce7270e1-24af-4c4b-9e07-7c30d4743484] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:32:14.833668  278286 system_pods.go:61] "kindnet-vh4hl" [c002c329-15ad-4066-8f90-bee3d9d18431] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:32:14.833673  278286 system_pods.go:61] "kube-apiserver-no-preload-211859" [3542f7bf-5681-4ded-a281-872f51789333] Running
	I0108 21:32:14.833682  278286 system_pods.go:61] "kube-controller-manager-no-preload-211859" [44859af0-ff02-4470-9f28-d6952d195bbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:32:14.833690  278286 system_pods.go:61] "kube-proxy-zb6wz" [8da901e0-be84-453e-895c-7b0b2c60bc76] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:32:14.833697  278286 system_pods.go:61] "kube-scheduler-no-preload-211859" [3f953e75-f501-4cef-83cf-e39f1cab3b94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:32:14.833707  278286 system_pods.go:61] "metrics-server-5c8fd5cf8-cr777" [92f4ef12-2c95-4b70-b116-f8552a32416e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:32:14.833716  278286 system_pods.go:61] "storage-provisioner" [05464a1d-53d5-4d21-a5a3-3453e21df72a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:32:14.833721  278286 system_pods.go:74] duration metric: took 6.897553ms to wait for pod list to return data ...
	I0108 21:32:14.833731  278286 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:32:14.836514  278286 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:32:14.836540  278286 node_conditions.go:123] node cpu capacity is 8
	I0108 21:32:14.836552  278286 node_conditions.go:105] duration metric: took 2.81613ms to run NodePressure ...
	I0108 21:32:14.836572  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:32:14.970125  278286 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 21:32:14.974329  278286 kubeadm.go:778] kubelet initialised
	I0108 21:32:14.974351  278286 kubeadm.go:779] duration metric: took 4.202323ms waiting for restarted kubelet to initialise ...
	I0108 21:32:14.974360  278286 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:32:14.979113  278286 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-jw8vf" in "kube-system" namespace to be "Ready" ...
	I0108 21:32:16.985255  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:19.485224  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:21.485328  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:23.485383  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:25.984908  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:25.746598  274657 retry.go:31] will retry after 15.44552029s: kubelet not initialised
	I0108 21:32:28.484769  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:30.985383  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:33.485160  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:35.985632  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:38.485442  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:40.985209  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	852d56656c585       d6e3e26021b60       3 minutes ago       Exited              kindnet-cni               3                   ec976b233877d
	7bd93fc5f6581       beaaf00edd38a       12 minutes ago      Running             kube-proxy                0                   024e28d63934a
	26d1b1e130787       6d23ec0e8b87e       12 minutes ago      Running             kube-scheduler            0                   4dc05b9437d19
	581d92e607165       0346dbd74bcb9       12 minutes ago      Running             kube-apiserver            0                   72e3dc94d266d
	e519152964881       a8a176a5d5d69       12 minutes ago      Running             etcd                      0                   559e4f8929fdb
	b7739474207ce       6039992312758       12 minutes ago      Running             kube-controller-manager   0                   88b0b0b5461c4
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sun 2023-01-08 21:20:01 UTC, end at Sun 2023-01-08 21:32:43 UTC. --
	Jan 08 21:26:01 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:26:01.162205507Z" level=warning msg="cleaning up after shim disconnected" id=26b22a04bdb019a098d0f1f44a38a46d29d47cd2436b0841e91973eb9ee5d16f namespace=k8s.io
	Jan 08 21:26:01 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:26:01.162217099Z" level=info msg="cleaning up dead shim"
	Jan 08 21:26:01 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:26:01.171612213Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:26:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2490 runtime=io.containerd.runc.v2\n"
	Jan 08 21:26:01 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:26:01.872453283Z" level=info msg="RemoveContainer for \"1fa79460d9970b0b01c36d10dcdf208d3b29a541c31eade64c5d1edc1396a415\""
	Jan 08 21:26:01 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:26:01.877780611Z" level=info msg="RemoveContainer for \"1fa79460d9970b0b01c36d10dcdf208d3b29a541c31eade64c5d1edc1396a415\" returns successfully"
	Jan 08 21:26:16 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:26:16.178001178Z" level=info msg="CreateContainer within sandbox \"ec976b233877df6d70050f28bed5c493233ac119d07827fd8f061999461e22dc\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jan 08 21:26:16 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:26:16.190619294Z" level=info msg="CreateContainer within sandbox \"ec976b233877df6d70050f28bed5c493233ac119d07827fd8f061999461e22dc\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"87ea33d1bc34009bc45e4d9df589732a2cf80036232cbdaf54e5167f45e4468e\""
	Jan 08 21:26:16 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:26:16.191299761Z" level=info msg="StartContainer for \"87ea33d1bc34009bc45e4d9df589732a2cf80036232cbdaf54e5167f45e4468e\""
	Jan 08 21:26:16 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:26:16.330984005Z" level=info msg="StartContainer for \"87ea33d1bc34009bc45e4d9df589732a2cf80036232cbdaf54e5167f45e4468e\" returns successfully"
	Jan 08 21:28:56 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:28:56.852488648Z" level=info msg="shim disconnected" id=87ea33d1bc34009bc45e4d9df589732a2cf80036232cbdaf54e5167f45e4468e
	Jan 08 21:28:56 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:28:56.852561258Z" level=warning msg="cleaning up after shim disconnected" id=87ea33d1bc34009bc45e4d9df589732a2cf80036232cbdaf54e5167f45e4468e namespace=k8s.io
	Jan 08 21:28:56 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:28:56.852582223Z" level=info msg="cleaning up dead shim"
	Jan 08 21:28:56 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:28:56.861162326Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:28:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2603 runtime=io.containerd.runc.v2\n"
	Jan 08 21:28:57 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:28:57.190868018Z" level=info msg="RemoveContainer for \"26b22a04bdb019a098d0f1f44a38a46d29d47cd2436b0841e91973eb9ee5d16f\""
	Jan 08 21:28:57 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:28:57.196069132Z" level=info msg="RemoveContainer for \"26b22a04bdb019a098d0f1f44a38a46d29d47cd2436b0841e91973eb9ee5d16f\" returns successfully"
	Jan 08 21:29:21 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:29:21.177608824Z" level=info msg="CreateContainer within sandbox \"ec976b233877df6d70050f28bed5c493233ac119d07827fd8f061999461e22dc\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jan 08 21:29:21 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:29:21.191055380Z" level=info msg="CreateContainer within sandbox \"ec976b233877df6d70050f28bed5c493233ac119d07827fd8f061999461e22dc\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f\""
	Jan 08 21:29:21 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:29:21.191581615Z" level=info msg="StartContainer for \"852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f\""
	Jan 08 21:29:21 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:29:21.328896629Z" level=info msg="StartContainer for \"852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f\" returns successfully"
	Jan 08 21:32:01 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:32:01.764871180Z" level=info msg="shim disconnected" id=852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f
	Jan 08 21:32:01 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:32:01.764933025Z" level=warning msg="cleaning up after shim disconnected" id=852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f namespace=k8s.io
	Jan 08 21:32:01 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:32:01.764948642Z" level=info msg="cleaning up dead shim"
	Jan 08 21:32:01 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:32:01.774147090Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:32:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2720 runtime=io.containerd.runc.v2\n"
	Jan 08 21:32:02 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:32:02.519156519Z" level=info msg="RemoveContainer for \"87ea33d1bc34009bc45e4d9df589732a2cf80036232cbdaf54e5167f45e4468e\""
	Jan 08 21:32:02 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:32:02.523934846Z" level=info msg="RemoveContainer for \"87ea33d1bc34009bc45e4d9df589732a2cf80036232cbdaf54e5167f45e4468e\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-211952
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-211952
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=default-k8s-diff-port-211952
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T21_20_27_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 21:20:23 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-211952
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 08 Jan 2023 21:32:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 21:30:48 +0000   Sun, 08 Jan 2023 21:20:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 21:30:48 +0000   Sun, 08 Jan 2023 21:20:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 21:30:48 +0000   Sun, 08 Jan 2023 21:20:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 08 Jan 2023 21:30:48 +0000   Sun, 08 Jan 2023 21:20:20 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-diff-port-211952
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                fe5ecc0a-a17f-4998-8022-5b0438ac303f
	  Boot ID:                    abb1671c-ddf5-4694-bdc8-1024e5cc0b18
	  Kernel Version:             5.15.0-1025-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.10
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-diff-port-211952                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-52cqk                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-default-k8s-diff-port-211952             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-211952    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-hz8lw                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-diff-port-211952             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x5 over 12m)  kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node default-k8s-diff-port-211952 event: Registered Node default-k8s-diff-port-211952 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +2.971851] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027844] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027909] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[Jan 8 21:19] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.006215] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023951] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.967852] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.035798] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023925] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.940341] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.027361] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.019905] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	
	* 
	* ==> etcd [e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa] <==
	* {"level":"info","ts":"2023-01-08T21:20:20.224Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-08T21:20:20.224Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:default-k8s-diff-port-211952 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:20:20.414Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:20:20.414Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-08T21:20:20.414Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-08T21:20:20.414Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:20:20.414Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:20:20.414Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:20:20.415Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-08T21:20:20.415Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-01-08T21:26:44.716Z","caller":"traceutil/trace.go:171","msg":"trace[1467103032] linearizableReadLoop","detail":"{readStateIndex:587; appliedIndex:587; }","duration":"154.212936ms","start":"2023-01-08T21:26:44.562Z","end":"2023-01-08T21:26:44.716Z","steps":["trace[1467103032] 'read index received'  (duration: 154.202833ms)","trace[1467103032] 'applied index is now lower than readState.Index'  (duration: 8.591µs)"],"step_count":2}
	{"level":"warn","ts":"2023-01-08T21:26:44.716Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"154.368568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2023-01-08T21:26:44.717Z","caller":"traceutil/trace.go:171","msg":"trace[305862768] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:497; }","duration":"154.464616ms","start":"2023-01-08T21:26:44.562Z","end":"2023-01-08T21:26:44.717Z","steps":["trace[305862768] 'agreement among raft nodes before linearized reading'  (duration: 154.314522ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:30:20.934Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":467}
	{"level":"info","ts":"2023-01-08T21:30:20.935Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":467,"took":"437.744µs"}
	
	* 
	* ==> kernel <==
	*  21:32:43 up  1:15,  0 users,  load average: 0.50, 0.65, 1.16
	Linux default-k8s-diff-port-211952 5.15.0-1025-gcp #32~20.04.2-Ubuntu SMP Tue Nov 29 08:31:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d] <==
	* I0108 21:20:23.209769       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0108 21:20:23.209857       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:20:23.210244       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0108 21:20:23.210328       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:20:23.215976       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0108 21:20:23.218195       1 controller.go:616] quota admission added evaluator for: namespaces
	I0108 21:20:23.254453       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:20:23.310287       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0108 21:20:23.838717       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 21:20:24.058948       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0108 21:20:24.061828       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0108 21:20:24.061850       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 21:20:24.399270       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:20:24.428887       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 21:20:24.527386       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0108 21:20:24.532706       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0108 21:20:24.533803       1 controller.go:616] quota admission added evaluator for: endpoints
	I0108 21:20:24.537243       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 21:20:25.141317       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0108 21:20:25.989727       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0108 21:20:25.999258       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0108 21:20:26.006195       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0108 21:20:26.084892       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:20:38.698379       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0108 21:20:38.849178       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d] <==
	* I0108 21:20:37.996090       1 shared_informer.go:262] Caches are synced for crt configmap
	I0108 21:20:38.001667       1 shared_informer.go:262] Caches are synced for node
	I0108 21:20:38.001690       1 range_allocator.go:166] Starting range CIDR allocator
	I0108 21:20:38.001704       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0108 21:20:38.001715       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0108 21:20:38.006243       1 range_allocator.go:367] Set node default-k8s-diff-port-211952 PodCIDR to [10.244.0.0/24]
	I0108 21:20:38.017669       1 shared_informer.go:262] Caches are synced for attach detach
	I0108 21:20:38.042143       1 shared_informer.go:262] Caches are synced for stateful set
	I0108 21:20:38.045285       1 shared_informer.go:262] Caches are synced for expand
	I0108 21:20:38.090843       1 shared_informer.go:262] Caches are synced for deployment
	I0108 21:20:38.090843       1 shared_informer.go:262] Caches are synced for disruption
	I0108 21:20:38.141757       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 21:20:38.143906       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0108 21:20:38.182717       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0108 21:20:38.200627       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 21:20:38.504455       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 21:20:38.504480       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0108 21:20:38.519923       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 21:20:38.705861       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hz8lw"
	I0108 21:20:38.708993       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-52cqk"
	I0108 21:20:38.851131       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I0108 21:20:39.000111       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-fd94f"
	I0108 21:20:39.004180       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-w786w"
	I0108 21:20:39.370532       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I0108 21:20:39.379431       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-w786w"
	
	* 
	* ==> kube-proxy [7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc] <==
	* I0108 21:20:39.252698       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0108 21:20:39.252848       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0108 21:20:39.252879       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0108 21:20:39.273356       1 server_others.go:206] "Using iptables Proxier"
	I0108 21:20:39.273390       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0108 21:20:39.273401       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0108 21:20:39.273419       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0108 21:20:39.273461       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:20:39.273614       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:20:39.273852       1 server.go:661] "Version info" version="v1.25.3"
	I0108 21:20:39.273873       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:20:39.274443       1 config.go:317] "Starting service config controller"
	I0108 21:20:39.274469       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0108 21:20:39.274476       1 config.go:226] "Starting endpoint slice config controller"
	I0108 21:20:39.274496       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0108 21:20:39.274534       1 config.go:444] "Starting node config controller"
	I0108 21:20:39.274554       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0108 21:20:39.375304       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0108 21:20:39.375333       1 shared_informer.go:262] Caches are synced for service config
	I0108 21:20:39.375369       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225] <==
	* W0108 21:20:23.231531       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:20:23.235629       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:20:23.231729       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:20:23.235656       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 21:20:23.231874       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:20:23.235675       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:20:23.233627       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 21:20:23.235694       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 21:20:23.233737       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:20:23.235714       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:20:23.233741       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:20:23.235733       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 21:20:23.234883       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 21:20:23.235751       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 21:20:24.073855       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:20:24.073894       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:20:24.079980       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:20:24.080020       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 21:20:24.108284       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:20:24.108322       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:20:24.169681       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:20:24.169717       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 21:20:24.247187       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:20:24.247220       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0108 21:20:26.327263       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:20:01 UTC, end at Sun 2023-01-08 21:32:43 UTC. --
	Jan 08 21:31:26 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:31:26.528210    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:31 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:31:31.529896    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:36 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:31:36.530888    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:41 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:31:41.532636    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:46 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:31:46.534449    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:51 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:31:51.535841    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:56 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:31:56.536766    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:32:01 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:01.538197    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:32:02 default-k8s-diff-port-211952 kubelet[1322]: I0108 21:32:02.517994    1322 scope.go:115] "RemoveContainer" containerID="87ea33d1bc34009bc45e4d9df589732a2cf80036232cbdaf54e5167f45e4468e"
	Jan 08 21:32:02 default-k8s-diff-port-211952 kubelet[1322]: I0108 21:32:02.518396    1322 scope.go:115] "RemoveContainer" containerID="852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	Jan 08 21:32:02 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:02.518789    1322 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-52cqk_kube-system(4ae6659c-e68a-492e-9e3f-5ffb047114c5)\"" pod="kube-system/kindnet-52cqk" podUID=4ae6659c-e68a-492e-9e3f-5ffb047114c5
	Jan 08 21:32:06 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:06.539738    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:32:11 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:11.540451    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:32:14 default-k8s-diff-port-211952 kubelet[1322]: I0108 21:32:14.175015    1322 scope.go:115] "RemoveContainer" containerID="852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	Jan 08 21:32:14 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:14.176349    1322 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-52cqk_kube-system(4ae6659c-e68a-492e-9e3f-5ffb047114c5)\"" pod="kube-system/kindnet-52cqk" podUID=4ae6659c-e68a-492e-9e3f-5ffb047114c5
	Jan 08 21:32:16 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:16.541451    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:32:21 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:21.542116    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:32:26 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:26.542898    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:32:28 default-k8s-diff-port-211952 kubelet[1322]: I0108 21:32:28.174826    1322 scope.go:115] "RemoveContainer" containerID="852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	Jan 08 21:32:28 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:28.175088    1322 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-52cqk_kube-system(4ae6659c-e68a-492e-9e3f-5ffb047114c5)\"" pod="kube-system/kindnet-52cqk" podUID=4ae6659c-e68a-492e-9e3f-5ffb047114c5
	Jan 08 21:32:31 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:31.544594    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:32:36 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:36.545826    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:32:39 default-k8s-diff-port-211952 kubelet[1322]: I0108 21:32:39.175095    1322 scope.go:115] "RemoveContainer" containerID="852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	Jan 08 21:32:39 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:39.175457    1322 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-52cqk_kube-system(4ae6659c-e68a-492e-9e3f-5ffb047114c5)\"" pod="kube-system/kindnet-52cqk" podUID=4ae6659c-e68a-492e-9e3f-5ffb047114c5
	Jan 08 21:32:41 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:41.546536    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-211952 -n default-k8s-diff-port-211952
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-211952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-565d847f94-fd94f storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-diff-port-211952 describe pod busybox coredns-565d847f94-fd94f storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-211952 describe pod busybox coredns-565d847f94-fd94f storage-provisioner: exit status 1 (67.171618ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vrnzp (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-vrnzp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m47s (x2 over 8m2s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-fd94f" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context default-k8s-diff-port-211952 describe pod busybox coredns-565d847f94-fd94f storage-provisioner: exit status 1
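The FailedScheduling event above explains why busybox stays Pending: the single node still carries the node.kubernetes.io/not-ready taint, because the kubelet keeps logging "cni plugin not initialized" while the kindnet-cni container crash-loops. A minimal sketch for confirming that chain against the same kubeconfig context (the app=kindnet label selector is an assumption; the pod and container names are taken from the kubelet log above):

	# inspect the taints that block scheduling on the only node
	kubectl --context default-k8s-diff-port-211952 get node default-k8s-diff-port-211952 -o jsonpath='{.spec.taints}'
	# check the CNI daemonset pod; app=kindnet is an assumed label selector
	kubectl --context default-k8s-diff-port-211952 -n kube-system get pods -l app=kindnet -o wide
	# pull the previous (crashed) kindnet-cni container log for the pod named in the kubelet output
	kubectl --context default-k8s-diff-port-211952 -n kube-system logs kindnet-52cqk -c kindnet-cni --previous

Once the kindnet pod stays Running, the node should drop the not-ready taint and the busybox pod should schedule.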
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-211952
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-211952:

-- stdout --
	[
	    {
	        "Id": "553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a",
	        "Created": "2023-01-08T21:20:01.150415833Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246803,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:20:01.544064591Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/hostname",
	        "HostsPath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/hosts",
	        "LogPath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a-json.log",
	        "Name": "/default-k8s-diff-port-211952",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-211952:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-211952",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4-init/diff:/var/lib/docker/overlay2/08b33854295261bb38a0074055b3dd7f2f489d35ea70ee5a594182ad1dc0fd2b/diff:/var/lib/docker/overlay2/5abe192e44c49d2a25560e4c1ae4a6ba5ec58100f8e1869d306a28b614caf414/diff:/var/lib/docker/overlay2/156a32df2dac676d7cd6558d5ad13be5054eb2fddc8d1b10137bf34bd6b1ddf7/diff:/var/lib/docker/overlay2/f3a5b8630133b5cdc48bbdb98f666efad221386dbd08f5fc096c7ba6d3f31667/diff:/var/lib/docker/overlay2/4d4ccd218e479874818175ec6c4e28f06f0084eed74b9a83af5a96e8bf41e6c7/diff:/var/lib/docker/overlay2/ab8a3247b29cb45a0024dbc2477a50b042f03e39064c09e23d21a2b5f331013c/diff:/var/lib/docker/overlay2/cd6161fa5d366910a9eb537894f374c49b0ba256c81a9c1ad3d4f4786d730668/diff:/var/lib/docker/overlay2/a26d88db409214dee405ed385f40b0d58e37ef670c26e65c1132eb27e3bb984d/diff:/var/lib/docker/overlay2/f76d833c20bf46fe7aec412f931039cdeb58b129552b97228c55fada8b05e63a/diff:/var/lib/docker/overlay2/584beb
80220462b8bc5a6c651735d55544ebef775c71b1a91b92235049a7e9da/diff:/var/lib/docker/overlay2/5209ebd508837f05291d5f22eff307db47f9c94691dd096cdc6ff6446db45461/diff:/var/lib/docker/overlay2/968fc302c0737742d4698bb0026b5cfc50c4b2934930db492c97b346fca2bd67/diff:/var/lib/docker/overlay2/b469d09ff82a75fb0f6dd04264a04b21d3d113c8577afc6dbc6a7dafe4022179/diff:/var/lib/docker/overlay2/2869116cf2f02bdc6170bde623a11022eba8e5c0f84630c9a0fd0a86483d0ed2/diff:/var/lib/docker/overlay2/ad4794ae637b0ce306963007b35562110fba050d9e3cd6196830448c749d7cc6/diff:/var/lib/docker/overlay2/a53b191897dfffb12ad2948cad16d668a0f7c2be5e3e6fd2556fe22d8d608879/diff:/var/lib/docker/overlay2/bbc7772821406640e29f442eb8752d92d7b72e8314035d6cf94255976b90642e/diff:/var/lib/docker/overlay2/5e3ac5600af83e3f6f756895fbd7a66fa97e799f659541508353078d057fd89b/diff:/var/lib/docker/overlay2/79c23e7084aa1de637cb09a7c07679d6c8d4ff6c6e7384f7f15cb549f8186fd6/diff:/var/lib/docker/overlay2/86ef1d4759376f1edb9cf43004ae59ba61200d79acea357b872de5247071531a/diff:/var/lib/d
ocker/overlay2/5c948a7514e051c2a3b4600944d475a674404c1f58b1bd7fd4d7b764be35c3e7/diff:/var/lib/docker/overlay2/71b25d81823337d776e0202f623b2f822a2ebba0e8f935e3211da145bbdac22d/diff:/var/lib/docker/overlay2/4261f26f8c837e7f8f07ba0cd2aca7274aa96561ac0bd104af28d44c15678f8d/diff:/var/lib/docker/overlay2/f2a71b21156c3ad367bb9fe8bde3e75df6681f9e505d34d8e1b635e50aa0e29b/diff:/var/lib/docker/overlay2/9ef1fde443b60bf4cda7ef2a51adcacef4383280f30eb6ee14e23959a5395e18/diff:/var/lib/docker/overlay2/aa727d5a432452b3e1ed2fc994db0bf2d5a47af89bc768a1e2b633baa8319025/diff:/var/lib/docker/overlay2/dc9499b738334b5626de1363086d853c4e6c2c5b2d51f58e8e1ed8346cc39f45/diff:/var/lib/docker/overlay2/03cc962675627b454b9857b5ffcac439143653d35cd42ad886571fbecd955c6c/diff:/var/lib/docker/overlay2/32064543ecb8bae609fda6b220233d7b73f4e601bd320e1d389451fb120c2459/diff:/var/lib/docker/overlay2/a0c00493cc3fdbdaa87f61faf83c8fcedb9ee4d5b24ecc35eceba0429b6dcc88/diff:/var/lib/docker/overlay2/18cbe77f8dab8144889e1deb46a8795fd207afbdfd9ea3fdeb526b72d7e
4a4cb/diff:/var/lib/docker/overlay2/dca4f5d0bca28bc75fa764afd39d07871f051c2d362fc2850ca957b975f75555/diff:/var/lib/docker/overlay2/cd071d8f661b9ff9c6d861fce06842188fda3d307062488b93944a97883f4a67/diff:/var/lib/docker/overlay2/5a0c166a1e11d0bb9f7de4a882be960f70ac972e91b3ab8fd83fc6eec2bf1d5e/diff:/var/lib/docker/overlay2/43eb3ae44d3bf3e6cbf5a66accacb1247ef210213c56a67097949d06794f4bbc/diff:/var/lib/docker/overlay2/b60cb5b525f4aed6b27cf2e451596e134d1185254f3798d5a91357814552e273/diff:/var/lib/docker/overlay2/e5c8c85e78fedb1981436f368ad70bdfe38d240dddceac3d103e1d09bc252cef/diff:/var/lib/docker/overlay2/7bf66ab7d17eda3c6760fe43588afa33bb56279f5a7202de9cc98b4711f0da5b/diff:/var/lib/docker/overlay2/d5dbd6e8a20b8681e7b55b57c382c8c9029da8496cc9e0f4d1eb3b224583535e/diff:/var/lib/docker/overlay2/de3ca9b514aa5071c0fd39d0d80fa080bce6e82e872747ec916ef7bf379c0c2a/diff:/var/lib/docker/overlay2/80bff03df7f2a6639d98a2c99d1cc55a36dc9110501452bfd7bce3bce496541e/diff:/var/lib/docker/overlay2/471bea5a4732f210c3fcf87a02643b615a96f6
39d01ac2c3e71f7886b23733a5/diff:/var/lib/docker/overlay2/29dd84326e3ef52c85008a60a7ff2225fd18f1d51ec3e60f143a0d93e442469d/diff:/var/lib/docker/overlay2/05d035eba3e40a396836d9917064869afd7ec70920f526f8903dc8f5a9f2dce3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-211952",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-211952/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-211952",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-211952",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-211952",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c2bf33dbb62611d9560108b1c0a529546771fed3ac5d99ff62eef897f847b173",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33023"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c2bf33dbb626",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-211952": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "553ec1d733bb",
	                        "default-k8s-diff-port-211952"
	                    ],
	                    "NetworkID": "dac77270e17703c586bb819b54d2f7262cc084b9a2efd9432712b1970a60294f",
	                    "EndpointID": "3d04b80dad9440ee7c222e1d09648b2670e8c59dcc8578b2d8550cd138b1734d",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
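The inspect output above shows the kic container itself is healthy: State.Status is "running" and API server port 8444/tcp is published on 127.0.0.1:33024, so the DeployApp failure lies inside the cluster (the uninitialized CNI), not at the Docker layer. A short sketch, using the same Go-template style the harness uses later in this log, for pulling just those fields instead of the full inspect dump:

	# container state only
	docker inspect -f '{{.State.Status}}' default-k8s-diff-port-211952
	# host port that 8444/tcp (the non-default API server port) is mapped to
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-211952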
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-211952 -n default-k8s-diff-port-211952
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-211952 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p embed-certs-211950                     | embed-certs-211950     | jenkins | v1.28.0 | 08 Jan 23 21:21 UTC | 08 Jan 23 21:21 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                        |         |         |                     |                     |
	| start   | -p embed-certs-211950                                      | embed-certs-211950     | jenkins | v1.28.0 | 08 Jan 23 21:21 UTC | 08 Jan 23 21:26 UTC |
	|         | --memory=2200                                              |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                              |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                        |         |         |                     |                     |
	| ssh     | -p embed-certs-211950 sudo                                 | embed-certs-211950     | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | crictl images -o json                                      |                        |         |         |                     |                     |
	| pause   | -p embed-certs-211950                                      | embed-certs-211950     | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | --alsologtostderr -v=1                                     |                        |         |         |                     |                     |
	| unpause | -p embed-certs-211950                                      | embed-certs-211950     | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | --alsologtostderr -v=1                                     |                        |         |         |                     |                     |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950     | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950     | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                        |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                        |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                        |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-212639                 | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                        |         |         |                     |                     |
	| stop    | -p newest-cni-212639                                       | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=3                                     |                        |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-212639                      | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                        |         |         |                     |                     |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                        |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                        |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                        |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                        |         |         |                     |                     |
	| ssh     | -p newest-cni-212639 sudo                                  | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | crictl images -o json                                      |                        |         |         |                     |                     |
	| pause   | -p newest-cni-212639                                       | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=1                                     |                        |         |         |                     |                     |
	| unpause | -p newest-cni-212639                                       | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	|         | --alsologtostderr -v=1                                     |                        |         |         |                     |                     |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639      | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| addons  | enable metrics-server -p old-k8s-version-211828            | old-k8s-version-211828 | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-211828                                  | old-k8s-version-211828 | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --alsologtostderr -v=3                                     |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-211828                 | old-k8s-version-211828 | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-211828                                  | old-k8s-version-211828 | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC |                     |
	|         | --memory=2200                                              |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                        |         |         |                     |                     |
	|         | --kvm-network=default                                      |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                        |         |         |                     |                     |
	|         | --keep-context=false                                       |                        |         |         |                     |                     |
	|         | --driver=docker                                            |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-211859                 | no-preload-211859      | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                        |         |         |                     |                     |
	| stop    | -p no-preload-211859                                       | no-preload-211859      | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --alsologtostderr -v=3                                     |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-211859                      | no-preload-211859      | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                        |         |         |                     |                     |
	| start   | -p no-preload-211859                                       | no-preload-211859      | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC |                     |
	|         | --memory=2200                                              |                        |         |         |                     |                     |
	|         | --alsologtostderr                                          |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                        |         |         |                     |                     |
	|         | --driver=docker                                            |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                        |         |         |                     |                     |
	|---------|------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 21:31:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:31:47.372197  278286 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:31:47.372417  278286 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:31:47.372427  278286 out.go:309] Setting ErrFile to fd 2...
	I0108 21:31:47.372431  278286 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:31:47.372606  278286 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:31:47.373198  278286 out.go:303] Setting JSON to false
	I0108 21:31:47.374571  278286 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4457,"bootTime":1673209051,"procs":559,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:31:47.374633  278286 start.go:135] virtualization: kvm guest
	I0108 21:31:47.377099  278286 out.go:177] * [no-preload-211859] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:31:47.378501  278286 notify.go:220] Checking for updates...
	I0108 21:31:47.380024  278286 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:31:47.381860  278286 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:31:47.383393  278286 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:31:47.384839  278286 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:31:47.386173  278286 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:31:47.387871  278286 config.go:180] Loaded profile config "no-preload-211859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:31:47.388286  278286 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:31:47.418011  278286 docker.go:137] docker version: linux-20.10.22
	I0108 21:31:47.418111  278286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:31:47.514291  278286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:31:47.43856006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:31:47.514399  278286 docker.go:254] overlay module found
	I0108 21:31:47.517594  278286 out.go:177] * Using the docker driver based on existing profile
	I0108 21:31:47.519152  278286 start.go:294] selected driver: docker
	I0108 21:31:47.519173  278286 start.go:838] validating driver "docker" against &{Name:no-preload-211859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:no-preload-211859 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:31:47.519311  278286 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:31:47.520298  278286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:31:47.620459  278286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:31:47.542696624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:31:47.620698  278286 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:31:47.620723  278286 cni.go:95] Creating CNI manager for ""
	I0108 21:31:47.620731  278286 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:31:47.620745  278286 start_flags.go:317] config:
	{Name:no-preload-211859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:no-preload-211859 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:31:47.623862  278286 out.go:177] * Starting control plane node no-preload-211859 in cluster no-preload-211859
	I0108 21:31:47.625336  278286 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:31:47.626861  278286 out.go:177] * Pulling base image ...
	I0108 21:31:47.628400  278286 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:31:47.628429  278286 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:31:47.628561  278286 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/config.json ...
	I0108 21:31:47.628618  278286 cache.go:107] acquiring lock: {Name:mka4eae081deb9dc030a8e6d208cdbfc375fedd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628650  278286 cache.go:107] acquiring lock: {Name:mk5f6bff7f6f0a24f6225496f42d8e8e28b27999 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628705  278286 cache.go:107] acquiring lock: {Name:mk5f9a0ef25a028cc0da95c581faa4f8582f8133 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628775  278286 cache.go:107] acquiring lock: {Name:mk240cd96639812e2ee7ab4caa38c9f49d9f4169 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628774  278286 cache.go:107] acquiring lock: {Name:mk09e8a53a311c6d58c16c85cb6a7a373e3c68b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628784  278286 cache.go:107] acquiring lock: {Name:mk1ba37dc36f668cc1aa7c0cabe840314426c4d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628812  278286 cache.go:107] acquiring lock: {Name:mka15fcca44dc28e79d1a5c07b3e2caf71bae5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628787  278286 cache.go:107] acquiring lock: {Name:mkcc5294a2af912a919e5a940c540341ff897a1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628907  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 exists
	I0108 21:31:47.628928  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 exists
	I0108 21:31:47.628933  278286 cache.go:96] cache image "registry.k8s.io/pause:3.8" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8" took 121.663µs
	I0108 21:31:47.628938  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 exists
	I0108 21:31:47.628946  278286 cache.go:80] save to tar file registry.k8s.io/pause:3.8 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 succeeded
	I0108 21:31:47.628906  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 21:31:47.628955  278286 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.25.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3" took 317.677µs
	I0108 21:31:47.628958  278286 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.25.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3" took 270.333µs
	I0108 21:31:47.628967  278286 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.25.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 succeeded
	I0108 21:31:47.628967  278286 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.25.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 succeeded
	I0108 21:31:47.628967  278286 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 355.563µs
	I0108 21:31:47.628976  278286 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 21:31:47.628993  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 exists
	I0108 21:31:47.629015  278286 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.25.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3" took 352.452µs
	I0108 21:31:47.629027  278286 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.25.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 succeeded
	I0108 21:31:47.629038  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 exists
	I0108 21:31:47.629049  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 exists
	I0108 21:31:47.629056  278286 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.25.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3" took 323.146µs
	I0108 21:31:47.629064  278286 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.25.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 succeeded
	I0108 21:31:47.629070  278286 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3" took 297.994µs
	I0108 21:31:47.629088  278286 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 succeeded
	I0108 21:31:47.629051  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 exists
	I0108 21:31:47.629102  278286 cache.go:96] cache image "registry.k8s.io/etcd:3.5.4-0" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0" took 330.46µs
	I0108 21:31:47.629116  278286 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.4-0 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 succeeded
	I0108 21:31:47.629122  278286 cache.go:87] Successfully saved all images to host disk.
	I0108 21:31:47.652827  278286 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:31:47.652851  278286 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:31:47.652870  278286 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:31:47.652900  278286 start.go:364] acquiring machines lock for no-preload-211859: {Name:mk421f625ba7c0f468447c7930aeee12b4ccfc5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.653003  278286 start.go:368] acquired machines lock for "no-preload-211859" in 85.079µs
	I0108 21:31:47.653019  278286 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:31:47.653023  278286 fix.go:55] fixHost starting: 
	I0108 21:31:47.653231  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:31:47.676820  278286 fix.go:103] recreateIfNeeded on no-preload-211859: state=Stopped err=<nil>
	W0108 21:31:47.676850  278286 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:31:47.679056  278286 out.go:177] * Restarting existing docker container for "no-preload-211859" ...
	I0108 21:31:44.891244  274657 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0108 21:31:46.398419  274657 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0108 21:31:47.476145  274657 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0108 21:31:49.350616  274657 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0108 21:31:47.680453  278286 cli_runner.go:164] Run: docker start no-preload-211859
	I0108 21:31:48.055774  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:31:48.081772  278286 kic.go:415] container "no-preload-211859" state is running.
	I0108 21:31:48.082176  278286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-211859
	I0108 21:31:48.106752  278286 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/config.json ...
	I0108 21:31:48.106996  278286 machine.go:88] provisioning docker machine ...
	I0108 21:31:48.107026  278286 ubuntu.go:169] provisioning hostname "no-preload-211859"
	I0108 21:31:48.107073  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:48.132199  278286 main.go:134] libmachine: Using SSH client type: native
	I0108 21:31:48.132389  278286 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33052 <nil> <nil>}
	I0108 21:31:48.132411  278286 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-211859 && echo "no-preload-211859" | sudo tee /etc/hostname
	I0108 21:31:48.133075  278286 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53914->127.0.0.1:33052: read: connection reset by peer
	I0108 21:31:51.259690  278286 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-211859
	
	I0108 21:31:51.259765  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:51.287159  278286 main.go:134] libmachine: Using SSH client type: native
	I0108 21:31:51.287325  278286 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33052 <nil> <nil>}
	I0108 21:31:51.287351  278286 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-211859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-211859/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-211859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:31:51.403424  278286 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:31:51.403455  278286 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:31:51.403534  278286 ubuntu.go:177] setting up certificates
	I0108 21:31:51.403545  278286 provision.go:83] configureAuth start
	I0108 21:31:51.403600  278286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-211859
	I0108 21:31:51.427972  278286 provision.go:138] copyHostCerts
	I0108 21:31:51.428030  278286 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:31:51.428040  278286 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:31:51.428108  278286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:31:51.428200  278286 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:31:51.428212  278286 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:31:51.428241  278286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:31:51.428291  278286 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:31:51.428298  278286 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:31:51.428324  278286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:31:51.428366  278286 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.no-preload-211859 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-211859]
	I0108 21:31:51.573024  278286 provision.go:172] copyRemoteCerts
	I0108 21:31:51.573080  278286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:31:51.573115  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:51.597019  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:31:51.682658  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:31:51.699465  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 21:31:51.716152  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:31:51.732857  278286 provision.go:86] duration metric: configureAuth took 329.295378ms
	I0108 21:31:51.732886  278286 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:31:51.733029  278286 config.go:180] Loaded profile config "no-preload-211859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:31:51.733040  278286 machine.go:91] provisioned docker machine in 3.626026428s
	I0108 21:31:51.733046  278286 start.go:300] post-start starting for "no-preload-211859" (driver="docker")
	I0108 21:31:51.733052  278286 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:31:51.733093  278286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:31:51.733143  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:51.758975  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:31:51.842569  278286 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:31:51.845292  278286 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:31:51.845322  278286 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:31:51.845336  278286 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:31:51.845349  278286 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:31:51.845361  278286 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:31:51.845402  278286 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:31:51.845479  278286 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:31:51.845561  278286 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:31:51.851717  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:31:51.868480  278286 start.go:303] post-start completed in 135.417503ms
	I0108 21:31:51.868534  278286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:31:51.868562  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:51.892345  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:31:51.979939  278286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:31:51.983706  278286 fix.go:57] fixHost completed within 4.330677273s
	I0108 21:31:51.983729  278286 start.go:83] releasing machines lock for "no-preload-211859", held for 4.33071417s
	I0108 21:31:51.983817  278286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-211859
	I0108 21:31:52.008250  278286 ssh_runner.go:195] Run: cat /version.json
	I0108 21:31:52.008306  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:52.008345  278286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:31:52.008415  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:52.036127  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:31:52.036559  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:31:52.148615  278286 ssh_runner.go:195] Run: systemctl --version
	I0108 21:31:52.152487  278286 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:31:52.163721  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:31:52.173278  278286 docker.go:189] disabling docker service ...
	I0108 21:31:52.173325  278286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:31:52.183249  278286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:31:52.192257  278286 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:31:52.270587  278286 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:31:52.341138  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:31:52.350264  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:31:52.362467  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:31:52.370150  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:31:51.905434  274657 retry.go:31] will retry after 5.131623747s: kubelet not initialised
	I0108 21:31:52.377936  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:31:52.385834  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 21:31:52.393630  278286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:31:52.400059  278286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:31:52.406552  278286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:31:52.484476  278286 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:31:52.547909  278286 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:31:52.547978  278286 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:31:52.551296  278286 start.go:472] Will wait 60s for crictl version
	I0108 21:31:52.551354  278286 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:31:52.578456  278286 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:31:52Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 21:31:57.042459  274657 retry.go:31] will retry after 9.757045979s: kubelet not initialised
	I0108 21:32:03.626227  278286 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:32:03.650433  278286 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
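
The first `sudo crictl version` above fails with "server is not initialized yet" because containerd has just been restarted, so the start logic backs off and retries until the CRI endpoint answers. A minimal sketch of that wait-and-retry pattern in Go follows (illustrative only, not minikube's retry.go; the function name, backoff values, and timeout are assumptions):

// Sketch only: poll `crictl version` until the CRI runtime answers.
// Mirrors the behaviour logged above, where the first call fails with
// "server is not initialized yet" and a later retry succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForCRI(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 2 * time.Second
	for {
		out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
		if err == nil {
			fmt.Printf("crictl reports:\n%s", out)
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("CRI runtime never became ready: %v: %s", err, out)
		}
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff between attempts
	}
}

func main() {
	if err := waitForCRI(60 * time.Second); err != nil {
		fmt.Println(err)
	}
}
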
	I0108 21:32:03.650513  278286 ssh_runner.go:195] Run: containerd --version
	I0108 21:32:03.673911  278286 ssh_runner.go:195] Run: containerd --version
	I0108 21:32:03.701612  278286 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:32:03.703195  278286 cli_runner.go:164] Run: docker network inspect no-preload-211859 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:32:03.727853  278286 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0108 21:32:03.731414  278286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:32:03.741350  278286 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:32:03.741394  278286 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:32:03.765441  278286 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:32:03.765465  278286 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:32:03.765518  278286 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:32:03.789768  278286 cni.go:95] Creating CNI manager for ""
	I0108 21:32:03.789800  278286 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:32:03.789817  278286 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:32:03.789833  278286 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-211859 NodeName:no-preload-211859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:32:03.789993  278286 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-211859"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:32:03.790112  278286 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-211859 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:no-preload-211859 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
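
The block above is the kubeadm configuration and kubelet unit that minikube writes out (to /var/tmp/minikube/kubeadm.yaml.new and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, per the scp lines that follow) before running the `kubeadm init phase` commands. How such a document can be rendered from a handful of parameters is sketched below with text/template; this is illustrative only, not minikube's generator, and the struct and field names are invented:

// Sketch only: render a ClusterConfiguration document like the one logged
// above from a small parameter struct. Type and field names are invented.
package main

import (
	"os"
	"text/template"
)

type clusterParams struct {
	ClusterName       string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:8443
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		ClusterName:       "mk",
		KubernetesVersion: "v1.25.3",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	// template.Must panics on a parse error, which is fine for a static template.
	t := template.Must(template.New("clusterconfig").Parse(clusterConfigTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
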
	I0108 21:32:03.790181  278286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:32:03.797254  278286 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:32:03.797327  278286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:32:03.804119  278286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (510 bytes)
	I0108 21:32:03.816978  278286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:32:03.830009  278286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2046 bytes)
	I0108 21:32:03.844130  278286 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:32:03.847152  278286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:32:03.856758  278286 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859 for IP: 192.168.85.2
	I0108 21:32:03.856858  278286 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:32:03.856896  278286 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:32:03.856956  278286 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/client.key
	I0108 21:32:03.857006  278286 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.key.43b9df8c
	I0108 21:32:03.857041  278286 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/proxy-client.key
	I0108 21:32:03.857131  278286 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:32:03.857160  278286 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:32:03.857173  278286 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:32:03.857196  278286 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:32:03.857224  278286 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:32:03.857244  278286 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:32:03.857279  278286 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:32:03.857853  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:32:03.877228  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:32:03.894973  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:32:03.912325  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 21:32:03.929477  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:32:03.946055  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:32:03.962744  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:32:03.979740  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:32:03.996409  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:32:04.012779  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:32:04.029143  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:32:04.045747  278286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:32:04.058662  278286 ssh_runner.go:195] Run: openssl version
	I0108 21:32:04.063563  278286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:32:04.070705  278286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:32:04.073719  278286 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:32:04.073767  278286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:32:04.078393  278286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:32:04.085125  278286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:32:04.092323  278286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:32:04.095231  278286 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:32:04.095276  278286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:32:04.099886  278286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:32:04.107081  278286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:32:04.114108  278286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:32:04.117029  278286 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:32:04.117072  278286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:32:04.121793  278286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:32:04.128357  278286 kubeadm.go:396] StartCluster: {Name:no-preload-211859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:no-preload-211859 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:32:04.128442  278286 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:32:04.128495  278286 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:32:04.152477  278286 cri.go:87] found id: "da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c"
	I0108 21:32:04.152498  278286 cri.go:87] found id: "640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6"
	I0108 21:32:04.152505  278286 cri.go:87] found id: "7b61203838e946e52bb257036892e21a8137d6b02ae6e307cba917eba43045f1"
	I0108 21:32:04.152511  278286 cri.go:87] found id: "e5292d3c9357ae424b2211a5576a5c0d1dc2148f92dbb693b2b173d02a43a659"
	I0108 21:32:04.152516  278286 cri.go:87] found id: "4777a2f6ea154d2e676477c6810e4eebb38bfca013c0990a8605fa7676818ecf"
	I0108 21:32:04.152523  278286 cri.go:87] found id: "1c6e8899fc497e069140e33049c350dcdfe8bcafcaaba19c4666917216092e42"
	I0108 21:32:04.152528  278286 cri.go:87] found id: ""
	I0108 21:32:04.152561  278286 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:32:04.163354  278286 cri.go:114] JSON = null
	W0108 21:32:04.163405  278286 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0108 21:32:04.163457  278286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:32:04.169935  278286 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:32:04.169956  278286 kubeadm.go:627] restartCluster start
	I0108 21:32:04.169988  278286 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:32:04.176496  278286 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:04.177334  278286 kubeconfig.go:135] verify returned: extract IP: "no-preload-211859" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:32:04.177774  278286 kubeconfig.go:146] "no-preload-211859" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:32:04.178473  278286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:32:04.179892  278286 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:32:04.186632  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:04.186676  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:04.195110  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:04.395513  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:04.395582  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:04.404046  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:04.595266  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:04.595346  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:04.603669  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:04.795951  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:04.796019  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:04.804763  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:04.996094  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:04.996191  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:05.004793  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:05.196080  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:05.196146  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:05.204564  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:05.395860  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:05.395951  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:05.404477  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:05.595811  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:05.595891  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:05.604562  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:05.795835  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:05.795898  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:05.804403  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:05.995694  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:05.995762  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:06.004274  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:06.195535  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:06.195616  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:06.204305  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:06.395611  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:06.395692  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:06.404197  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:06.595519  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:06.595606  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:06.604401  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:06.795696  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:06.795764  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:06.804957  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:06.995206  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:06.995292  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:07.004148  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:07.195361  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:07.195428  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:07.204056  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:07.204077  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:07.204110  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:07.212048  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:07.212079  278286 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0108 21:32:07.212087  278286 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:32:07.212099  278286 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:32:07.212145  278286 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:32:07.235576  278286 cri.go:87] found id: "da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c"
	I0108 21:32:07.235604  278286 cri.go:87] found id: "640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6"
	I0108 21:32:07.235616  278286 cri.go:87] found id: "7b61203838e946e52bb257036892e21a8137d6b02ae6e307cba917eba43045f1"
	I0108 21:32:07.235626  278286 cri.go:87] found id: "e5292d3c9357ae424b2211a5576a5c0d1dc2148f92dbb693b2b173d02a43a659"
	I0108 21:32:07.235636  278286 cri.go:87] found id: "4777a2f6ea154d2e676477c6810e4eebb38bfca013c0990a8605fa7676818ecf"
	I0108 21:32:07.235650  278286 cri.go:87] found id: "1c6e8899fc497e069140e33049c350dcdfe8bcafcaaba19c4666917216092e42"
	I0108 21:32:07.235665  278286 cri.go:87] found id: ""
	I0108 21:32:07.235675  278286 cri.go:232] Stopping containers: [da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c 640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6 7b61203838e946e52bb257036892e21a8137d6b02ae6e307cba917eba43045f1 e5292d3c9357ae424b2211a5576a5c0d1dc2148f92dbb693b2b173d02a43a659 4777a2f6ea154d2e676477c6810e4eebb38bfca013c0990a8605fa7676818ecf 1c6e8899fc497e069140e33049c350dcdfe8bcafcaaba19c4666917216092e42]
	I0108 21:32:07.235717  278286 ssh_runner.go:195] Run: which crictl
	I0108 21:32:07.238503  278286 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c 640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6 7b61203838e946e52bb257036892e21a8137d6b02ae6e307cba917eba43045f1 e5292d3c9357ae424b2211a5576a5c0d1dc2148f92dbb693b2b173d02a43a659 4777a2f6ea154d2e676477c6810e4eebb38bfca013c0990a8605fa7676818ecf 1c6e8899fc497e069140e33049c350dcdfe8bcafcaaba19c4666917216092e42
	I0108 21:32:07.262960  278286 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:32:07.272749  278286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:32:07.279614  278286 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan  8 21:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  8 21:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jan  8 21:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan  8 21:19 /etc/kubernetes/scheduler.conf
	
	I0108 21:32:07.279671  278286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 21:32:07.286115  278286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 21:32:07.292656  278286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 21:32:07.299126  278286 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:07.299194  278286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 21:32:07.305509  278286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 21:32:07.312247  278286 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:07.312297  278286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 21:32:07.318608  278286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:32:07.325306  278286 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:32:07.325326  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:32:07.369488  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:32:06.804972  274657 retry.go:31] will retry after 18.937774914s: kubelet not initialised
	I0108 21:32:08.118233  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:32:08.253244  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:32:08.303991  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:32:08.412623  278286 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:32:08.412743  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:32:08.921962  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:32:09.421918  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:32:09.434129  278286 api_server.go:71] duration metric: took 1.021506771s to wait for apiserver process to appear ...
	I0108 21:32:09.434161  278286 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:32:09.434173  278286 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0108 21:32:09.434545  278286 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0108 21:32:09.935273  278286 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0108 21:32:12.725708  278286 api_server.go:278] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:32:12.725738  278286 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:32:12.935144  278286 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0108 21:32:12.939566  278286 api_server.go:278] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:32:12.939597  278286 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:32:13.435040  278286 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0108 21:32:13.439568  278286 api_server.go:278] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:32:13.439591  278286 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:32:13.934877  278286 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0108 21:32:13.939903  278286 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0108 21:32:13.945633  278286 api_server.go:140] control plane version: v1.25.3
	I0108 21:32:13.945662  278286 api_server.go:130] duration metric: took 4.511494879s to wait for apiserver health ...
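
The healthz probe above first returns 403 (the anonymous check is rejected while RBAC bootstrap roles are still being created), then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200. A bare-bones version of that polling loop in Go is sketched below (illustrative only, not minikube's api_server.go; the names, timeouts, and the insecure TLS setting are assumptions made for a local test apiserver):

// Sketch only: poll the apiserver /healthz endpoint until it returns 200,
// printing intermediate 403/500 responses like those seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The test apiserver uses a self-signed CA, so certificate
		// verification is skipped here; do not do this in production.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok": the control plane answers as healthy
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
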
	I0108 21:32:13.945673  278286 cni.go:95] Creating CNI manager for ""
	I0108 21:32:13.945681  278286 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:32:13.948245  278286 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:32:13.949871  278286 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:32:13.953423  278286 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:32:13.953439  278286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:32:13.966338  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:32:14.826804  278286 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:32:14.833621  278286 system_pods.go:59] 9 kube-system pods found
	I0108 21:32:14.833651  278286 system_pods.go:61] "coredns-565d847f94-jw8vf" [273a87b0-0dde-4637-b287-732fde04519d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:32:14.833659  278286 system_pods.go:61] "etcd-no-preload-211859" [ce7270e1-24af-4c4b-9e07-7c30d4743484] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:32:14.833668  278286 system_pods.go:61] "kindnet-vh4hl" [c002c329-15ad-4066-8f90-bee3d9d18431] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:32:14.833673  278286 system_pods.go:61] "kube-apiserver-no-preload-211859" [3542f7bf-5681-4ded-a281-872f51789333] Running
	I0108 21:32:14.833682  278286 system_pods.go:61] "kube-controller-manager-no-preload-211859" [44859af0-ff02-4470-9f28-d6952d195bbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:32:14.833690  278286 system_pods.go:61] "kube-proxy-zb6wz" [8da901e0-be84-453e-895c-7b0b2c60bc76] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:32:14.833697  278286 system_pods.go:61] "kube-scheduler-no-preload-211859" [3f953e75-f501-4cef-83cf-e39f1cab3b94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:32:14.833707  278286 system_pods.go:61] "metrics-server-5c8fd5cf8-cr777" [92f4ef12-2c95-4b70-b116-f8552a32416e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:32:14.833716  278286 system_pods.go:61] "storage-provisioner" [05464a1d-53d5-4d21-a5a3-3453e21df72a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:32:14.833721  278286 system_pods.go:74] duration metric: took 6.897553ms to wait for pod list to return data ...
	I0108 21:32:14.833731  278286 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:32:14.836514  278286 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:32:14.836540  278286 node_conditions.go:123] node cpu capacity is 8
	I0108 21:32:14.836552  278286 node_conditions.go:105] duration metric: took 2.81613ms to run NodePressure ...
	I0108 21:32:14.836572  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:32:14.970125  278286 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 21:32:14.974329  278286 kubeadm.go:778] kubelet initialised
	I0108 21:32:14.974351  278286 kubeadm.go:779] duration metric: took 4.202323ms waiting for restarted kubelet to initialise ...
	I0108 21:32:14.974360  278286 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:32:14.979113  278286 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-jw8vf" in "kube-system" namespace to be "Ready" ...
	I0108 21:32:16.985255  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:19.485224  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:21.485328  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:23.485383  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:25.984908  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:25.746598  274657 retry.go:31] will retry after 15.44552029s: kubelet not initialised
	I0108 21:32:28.484769  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:30.985383  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:33.485160  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:35.985632  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:38.485442  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:40.985209  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:41.196996  274657 kubeadm.go:778] kubelet initialised
	I0108 21:32:41.197018  274657 kubeadm.go:779] duration metric: took 58.431224474s waiting for restarted kubelet to initialise ...
	I0108 21:32:41.197025  274657 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:32:41.201356  274657 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace to be "Ready" ...
	I0108 21:32:43.206560  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	852d56656c585       d6e3e26021b60       3 minutes ago       Exited              kindnet-cni               3                   ec976b233877d
	7bd93fc5f6581       beaaf00edd38a       12 minutes ago      Running             kube-proxy                0                   024e28d63934a
	26d1b1e130787       6d23ec0e8b87e       12 minutes ago      Running             kube-scheduler            0                   4dc05b9437d19
	581d92e607165       0346dbd74bcb9       12 minutes ago      Running             kube-apiserver            0                   72e3dc94d266d
	e519152964881       a8a176a5d5d69       12 minutes ago      Running             etcd                      0                   559e4f8929fdb
	b7739474207ce       6039992312758       12 minutes ago      Running             kube-controller-manager   0                   88b0b0b5461c4
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sun 2023-01-08 21:20:01 UTC, end at Sun 2023-01-08 21:32:45 UTC. --
	Jan 08 21:26:01 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:26:01.162205507Z" level=warning msg="cleaning up after shim disconnected" id=26b22a04bdb019a098d0f1f44a38a46d29d47cd2436b0841e91973eb9ee5d16f namespace=k8s.io
	Jan 08 21:26:01 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:26:01.162217099Z" level=info msg="cleaning up dead shim"
	Jan 08 21:26:01 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:26:01.171612213Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:26:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2490 runtime=io.containerd.runc.v2\n"
	Jan 08 21:26:01 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:26:01.872453283Z" level=info msg="RemoveContainer for \"1fa79460d9970b0b01c36d10dcdf208d3b29a541c31eade64c5d1edc1396a415\""
	Jan 08 21:26:01 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:26:01.877780611Z" level=info msg="RemoveContainer for \"1fa79460d9970b0b01c36d10dcdf208d3b29a541c31eade64c5d1edc1396a415\" returns successfully"
	Jan 08 21:26:16 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:26:16.178001178Z" level=info msg="CreateContainer within sandbox \"ec976b233877df6d70050f28bed5c493233ac119d07827fd8f061999461e22dc\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jan 08 21:26:16 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:26:16.190619294Z" level=info msg="CreateContainer within sandbox \"ec976b233877df6d70050f28bed5c493233ac119d07827fd8f061999461e22dc\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"87ea33d1bc34009bc45e4d9df589732a2cf80036232cbdaf54e5167f45e4468e\""
	Jan 08 21:26:16 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:26:16.191299761Z" level=info msg="StartContainer for \"87ea33d1bc34009bc45e4d9df589732a2cf80036232cbdaf54e5167f45e4468e\""
	Jan 08 21:26:16 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:26:16.330984005Z" level=info msg="StartContainer for \"87ea33d1bc34009bc45e4d9df589732a2cf80036232cbdaf54e5167f45e4468e\" returns successfully"
	Jan 08 21:28:56 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:28:56.852488648Z" level=info msg="shim disconnected" id=87ea33d1bc34009bc45e4d9df589732a2cf80036232cbdaf54e5167f45e4468e
	Jan 08 21:28:56 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:28:56.852561258Z" level=warning msg="cleaning up after shim disconnected" id=87ea33d1bc34009bc45e4d9df589732a2cf80036232cbdaf54e5167f45e4468e namespace=k8s.io
	Jan 08 21:28:56 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:28:56.852582223Z" level=info msg="cleaning up dead shim"
	Jan 08 21:28:56 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:28:56.861162326Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:28:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2603 runtime=io.containerd.runc.v2\n"
	Jan 08 21:28:57 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:28:57.190868018Z" level=info msg="RemoveContainer for \"26b22a04bdb019a098d0f1f44a38a46d29d47cd2436b0841e91973eb9ee5d16f\""
	Jan 08 21:28:57 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:28:57.196069132Z" level=info msg="RemoveContainer for \"26b22a04bdb019a098d0f1f44a38a46d29d47cd2436b0841e91973eb9ee5d16f\" returns successfully"
	Jan 08 21:29:21 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:29:21.177608824Z" level=info msg="CreateContainer within sandbox \"ec976b233877df6d70050f28bed5c493233ac119d07827fd8f061999461e22dc\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jan 08 21:29:21 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:29:21.191055380Z" level=info msg="CreateContainer within sandbox \"ec976b233877df6d70050f28bed5c493233ac119d07827fd8f061999461e22dc\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f\""
	Jan 08 21:29:21 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:29:21.191581615Z" level=info msg="StartContainer for \"852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f\""
	Jan 08 21:29:21 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:29:21.328896629Z" level=info msg="StartContainer for \"852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f\" returns successfully"
	Jan 08 21:32:01 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:32:01.764871180Z" level=info msg="shim disconnected" id=852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f
	Jan 08 21:32:01 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:32:01.764933025Z" level=warning msg="cleaning up after shim disconnected" id=852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f namespace=k8s.io
	Jan 08 21:32:01 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:32:01.764948642Z" level=info msg="cleaning up dead shim"
	Jan 08 21:32:01 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:32:01.774147090Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:32:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2720 runtime=io.containerd.runc.v2\n"
	Jan 08 21:32:02 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:32:02.519156519Z" level=info msg="RemoveContainer for \"87ea33d1bc34009bc45e4d9df589732a2cf80036232cbdaf54e5167f45e4468e\""
	Jan 08 21:32:02 default-k8s-diff-port-211952 containerd[510]: time="2023-01-08T21:32:02.523934846Z" level=info msg="RemoveContainer for \"87ea33d1bc34009bc45e4d9df589732a2cf80036232cbdaf54e5167f45e4468e\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-211952
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-211952
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=default-k8s-diff-port-211952
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T21_20_27_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 21:20:23 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-211952
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 08 Jan 2023 21:32:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 21:30:48 +0000   Sun, 08 Jan 2023 21:20:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 21:30:48 +0000   Sun, 08 Jan 2023 21:20:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 21:30:48 +0000   Sun, 08 Jan 2023 21:20:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 08 Jan 2023 21:30:48 +0000   Sun, 08 Jan 2023 21:20:20 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-diff-port-211952
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                fe5ecc0a-a17f-4998-8022-5b0438ac303f
	  Boot ID:                    abb1671c-ddf5-4694-bdc8-1024e5cc0b18
	  Kernel Version:             5.15.0-1025-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.10
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-diff-port-211952                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-52cqk                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-default-k8s-diff-port-211952             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-211952    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-hz8lw                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-diff-port-211952             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x5 over 12m)  kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node default-k8s-diff-port-211952 event: Registered Node default-k8s-diff-port-211952 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +2.971851] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027844] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027909] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[Jan 8 21:19] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.006215] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023951] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.967852] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.035798] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023925] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.940341] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.027361] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.019905] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	
	* 
	* ==> etcd [e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa] <==
	* {"level":"info","ts":"2023-01-08T21:20:20.224Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-08T21:20:20.224Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:default-k8s-diff-port-211952 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:20:20.413Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:20:20.414Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:20:20.414Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-08T21:20:20.414Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-08T21:20:20.414Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:20:20.414Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:20:20.414Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:20:20.415Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-08T21:20:20.415Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-01-08T21:26:44.716Z","caller":"traceutil/trace.go:171","msg":"trace[1467103032] linearizableReadLoop","detail":"{readStateIndex:587; appliedIndex:587; }","duration":"154.212936ms","start":"2023-01-08T21:26:44.562Z","end":"2023-01-08T21:26:44.716Z","steps":["trace[1467103032] 'read index received'  (duration: 154.202833ms)","trace[1467103032] 'applied index is now lower than readState.Index'  (duration: 8.591µs)"],"step_count":2}
	{"level":"warn","ts":"2023-01-08T21:26:44.716Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"154.368568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2023-01-08T21:26:44.717Z","caller":"traceutil/trace.go:171","msg":"trace[305862768] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:497; }","duration":"154.464616ms","start":"2023-01-08T21:26:44.562Z","end":"2023-01-08T21:26:44.717Z","steps":["trace[305862768] 'agreement among raft nodes before linearized reading'  (duration: 154.314522ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T21:30:20.934Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":467}
	{"level":"info","ts":"2023-01-08T21:30:20.935Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":467,"took":"437.744µs"}
	
	* 
	* ==> kernel <==
	*  21:32:45 up  1:15,  0 users,  load average: 0.62, 0.67, 1.16
	Linux default-k8s-diff-port-211952 5.15.0-1025-gcp #32~20.04.2-Ubuntu SMP Tue Nov 29 08:31:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d] <==
	* I0108 21:20:23.209769       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0108 21:20:23.209857       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:20:23.210244       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0108 21:20:23.210328       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:20:23.215976       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0108 21:20:23.218195       1 controller.go:616] quota admission added evaluator for: namespaces
	I0108 21:20:23.254453       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:20:23.310287       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0108 21:20:23.838717       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 21:20:24.058948       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0108 21:20:24.061828       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0108 21:20:24.061850       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 21:20:24.399270       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:20:24.428887       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 21:20:24.527386       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0108 21:20:24.532706       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0108 21:20:24.533803       1 controller.go:616] quota admission added evaluator for: endpoints
	I0108 21:20:24.537243       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 21:20:25.141317       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0108 21:20:25.989727       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0108 21:20:25.999258       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0108 21:20:26.006195       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0108 21:20:26.084892       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:20:38.698379       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0108 21:20:38.849178       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d] <==
	* I0108 21:20:37.996090       1 shared_informer.go:262] Caches are synced for crt configmap
	I0108 21:20:38.001667       1 shared_informer.go:262] Caches are synced for node
	I0108 21:20:38.001690       1 range_allocator.go:166] Starting range CIDR allocator
	I0108 21:20:38.001704       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0108 21:20:38.001715       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0108 21:20:38.006243       1 range_allocator.go:367] Set node default-k8s-diff-port-211952 PodCIDR to [10.244.0.0/24]
	I0108 21:20:38.017669       1 shared_informer.go:262] Caches are synced for attach detach
	I0108 21:20:38.042143       1 shared_informer.go:262] Caches are synced for stateful set
	I0108 21:20:38.045285       1 shared_informer.go:262] Caches are synced for expand
	I0108 21:20:38.090843       1 shared_informer.go:262] Caches are synced for deployment
	I0108 21:20:38.090843       1 shared_informer.go:262] Caches are synced for disruption
	I0108 21:20:38.141757       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 21:20:38.143906       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0108 21:20:38.182717       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0108 21:20:38.200627       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 21:20:38.504455       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 21:20:38.504480       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0108 21:20:38.519923       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 21:20:38.705861       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hz8lw"
	I0108 21:20:38.708993       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-52cqk"
	I0108 21:20:38.851131       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I0108 21:20:39.000111       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-fd94f"
	I0108 21:20:39.004180       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-w786w"
	I0108 21:20:39.370532       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I0108 21:20:39.379431       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-w786w"
	
	* 
	* ==> kube-proxy [7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc] <==
	* I0108 21:20:39.252698       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0108 21:20:39.252848       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0108 21:20:39.252879       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0108 21:20:39.273356       1 server_others.go:206] "Using iptables Proxier"
	I0108 21:20:39.273390       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0108 21:20:39.273401       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0108 21:20:39.273419       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0108 21:20:39.273461       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:20:39.273614       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:20:39.273852       1 server.go:661] "Version info" version="v1.25.3"
	I0108 21:20:39.273873       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:20:39.274443       1 config.go:317] "Starting service config controller"
	I0108 21:20:39.274469       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0108 21:20:39.274476       1 config.go:226] "Starting endpoint slice config controller"
	I0108 21:20:39.274496       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0108 21:20:39.274534       1 config.go:444] "Starting node config controller"
	I0108 21:20:39.274554       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0108 21:20:39.375304       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0108 21:20:39.375333       1 shared_informer.go:262] Caches are synced for service config
	I0108 21:20:39.375369       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225] <==
	* W0108 21:20:23.231531       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:20:23.235629       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:20:23.231729       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:20:23.235656       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 21:20:23.231874       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:20:23.235675       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:20:23.233627       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 21:20:23.235694       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 21:20:23.233737       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:20:23.235714       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:20:23.233741       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:20:23.235733       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 21:20:23.234883       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 21:20:23.235751       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 21:20:24.073855       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:20:24.073894       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:20:24.079980       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:20:24.080020       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 21:20:24.108284       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:20:24.108322       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:20:24.169681       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:20:24.169717       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 21:20:24.247187       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:20:24.247220       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0108 21:20:26.327263       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:20:01 UTC, end at Sun 2023-01-08 21:32:45 UTC. --
	Jan 08 21:31:26 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:31:26.528210    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:31 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:31:31.529896    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:36 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:31:36.530888    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:41 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:31:41.532636    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:46 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:31:46.534449    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:51 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:31:51.535841    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:31:56 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:31:56.536766    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:32:01 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:01.538197    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:32:02 default-k8s-diff-port-211952 kubelet[1322]: I0108 21:32:02.517994    1322 scope.go:115] "RemoveContainer" containerID="87ea33d1bc34009bc45e4d9df589732a2cf80036232cbdaf54e5167f45e4468e"
	Jan 08 21:32:02 default-k8s-diff-port-211952 kubelet[1322]: I0108 21:32:02.518396    1322 scope.go:115] "RemoveContainer" containerID="852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	Jan 08 21:32:02 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:02.518789    1322 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-52cqk_kube-system(4ae6659c-e68a-492e-9e3f-5ffb047114c5)\"" pod="kube-system/kindnet-52cqk" podUID=4ae6659c-e68a-492e-9e3f-5ffb047114c5
	Jan 08 21:32:06 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:06.539738    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:32:11 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:11.540451    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:32:14 default-k8s-diff-port-211952 kubelet[1322]: I0108 21:32:14.175015    1322 scope.go:115] "RemoveContainer" containerID="852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	Jan 08 21:32:14 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:14.176349    1322 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-52cqk_kube-system(4ae6659c-e68a-492e-9e3f-5ffb047114c5)\"" pod="kube-system/kindnet-52cqk" podUID=4ae6659c-e68a-492e-9e3f-5ffb047114c5
	Jan 08 21:32:16 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:16.541451    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:32:21 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:21.542116    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:32:26 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:26.542898    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:32:28 default-k8s-diff-port-211952 kubelet[1322]: I0108 21:32:28.174826    1322 scope.go:115] "RemoveContainer" containerID="852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	Jan 08 21:32:28 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:28.175088    1322 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-52cqk_kube-system(4ae6659c-e68a-492e-9e3f-5ffb047114c5)\"" pod="kube-system/kindnet-52cqk" podUID=4ae6659c-e68a-492e-9e3f-5ffb047114c5
	Jan 08 21:32:31 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:31.544594    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:32:36 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:36.545826    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:32:39 default-k8s-diff-port-211952 kubelet[1322]: I0108 21:32:39.175095    1322 scope.go:115] "RemoveContainer" containerID="852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	Jan 08 21:32:39 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:39.175457    1322 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-52cqk_kube-system(4ae6659c-e68a-492e-9e3f-5ffb047114c5)\"" pod="kube-system/kindnet-52cqk" podUID=4ae6659c-e68a-492e-9e3f-5ffb047114c5
	Jan 08 21:32:41 default-k8s-diff-port-211952 kubelet[1322]: E0108 21:32:41.546536    1322 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-211952 -n default-k8s-diff-port-211952
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-211952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-565d847f94-fd94f storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-diff-port-211952 describe pod busybox coredns-565d847f94-fd94f storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-211952 describe pod busybox coredns-565d847f94-fd94f storage-provisioner: exit status 1 (69.322141ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vrnzp (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-vrnzp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m49s (x2 over 8m4s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-fd94f" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-diff-port-211952 describe pod busybox coredns-565d847f94-fd94f storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (484.28s)
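The describe output above shows the busybox pod stuck Pending because the single node still carries the node.kubernetes.io/not-ready taint, consistent with the uninitialized CNI in the kubelet log. A hedged follow-up sketch, not part of the recorded run, reusing the kubectl context from the commands above (the node name is assumed to match the profile name, as in the kubelet log prefix):

	# check whether the node ever reached Ready
	kubectl --context default-k8s-diff-port-211952 get nodes -o wide
	# inspect the node's taints and conditions
	kubectl --context default-k8s-diff-port-211952 describe node default-k8s-diff-port-211952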

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (596.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-211828 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-211828 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: exit status 80 (9m54.091553939s)

                                                
                                                
-- stdout --
	* [old-k8s-version-211828] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-211828 in cluster old-k8s-version-211828
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-211828" ...
	* Preparing Kubernetes v1.16.0 on containerd 1.6.10 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image k8s.gcr.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-211828 addons enable metrics-server	
	
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
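The stdout above reports the container restarted, Kubernetes v1.16.0 prepared, CNI configured, and the addons enabled, yet the command still exited with status 80 after roughly ten minutes. A possible way to re-check the cluster afterwards, not part of the recorded run, using the same minikube binary and profile named above:

	# summarize host/kubelet/apiserver state for the profile
	out/minikube-linux-amd64 status -p old-k8s-version-211828
	# list all pods to see which components never came up
	kubectl --context old-k8s-version-211828 get pods -A -o wide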
** stderr ** 
	I0108 21:31:14.786818  274657 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:31:14.787251  274657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:31:14.787265  274657 out.go:309] Setting ErrFile to fd 2...
	I0108 21:31:14.787272  274657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:31:14.787427  274657 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:31:14.788057  274657 out.go:303] Setting JSON to false
	I0108 21:31:14.789452  274657 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4424,"bootTime":1673209051,"procs":560,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:31:14.789560  274657 start.go:135] virtualization: kvm guest
	I0108 21:31:14.792273  274657 out.go:177] * [old-k8s-version-211828] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:31:14.793736  274657 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:31:14.793706  274657 notify.go:220] Checking for updates...
	I0108 21:31:14.796380  274657 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:31:14.797863  274657 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:31:14.799587  274657 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:31:14.801298  274657 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:31:14.803317  274657 config.go:180] Loaded profile config "old-k8s-version-211828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0108 21:31:14.805219  274657 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I0108 21:31:14.806495  274657 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:31:14.836588  274657 docker.go:137] docker version: linux-20.10.22
	I0108 21:31:14.836697  274657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:31:14.935102  274657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:31:14.857932215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:31:14.935250  274657 docker.go:254] overlay module found
	I0108 21:31:14.937603  274657 out.go:177] * Using the docker driver based on existing profile
	I0108 21:31:14.939308  274657 start.go:294] selected driver: docker
	I0108 21:31:14.939320  274657 start.go:838] validating driver "docker" against &{Name:old-k8s-version-211828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-211828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:31:14.939425  274657 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:31:14.940295  274657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:31:15.037391  274657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:31:14.960690951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:31:15.037661  274657 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:31:15.037690  274657 cni.go:95] Creating CNI manager for ""
	I0108 21:31:15.037701  274657 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:31:15.037727  274657 start_flags.go:317] config:
	{Name:old-k8s-version-211828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-211828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:31:15.040192  274657 out.go:177] * Starting control plane node old-k8s-version-211828 in cluster old-k8s-version-211828
	I0108 21:31:15.041641  274657 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:31:15.043001  274657 out.go:177] * Pulling base image ...
	I0108 21:31:15.044447  274657 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0108 21:31:15.044499  274657 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0108 21:31:15.044507  274657 cache.go:57] Caching tarball of preloaded images
	I0108 21:31:15.044542  274657 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:31:15.044751  274657 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:31:15.044768  274657 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0108 21:31:15.044879  274657 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/config.json ...
	I0108 21:31:15.070621  274657 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:31:15.070646  274657 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:31:15.070659  274657 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:31:15.070696  274657 start.go:364] acquiring machines lock for old-k8s-version-211828: {Name:mk7415b788fbdcf6791633774a550ddef2131776 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:15.070786  274657 start.go:368] acquired machines lock for "old-k8s-version-211828" in 67.237µs
	I0108 21:31:15.070803  274657 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:31:15.070808  274657 fix.go:55] fixHost starting: 
	I0108 21:31:15.071007  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:31:15.094712  274657 fix.go:103] recreateIfNeeded on old-k8s-version-211828: state=Stopped err=<nil>
	W0108 21:31:15.094743  274657 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:31:15.097062  274657 out.go:177] * Restarting existing docker container for "old-k8s-version-211828" ...
	I0108 21:31:15.098676  274657 cli_runner.go:164] Run: docker start old-k8s-version-211828
	I0108 21:31:15.451736  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:31:15.477931  274657 kic.go:415] container "old-k8s-version-211828" state is running.
	I0108 21:31:15.478259  274657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-211828
	I0108 21:31:15.502791  274657 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/config.json ...
	I0108 21:31:15.503068  274657 machine.go:88] provisioning docker machine ...
	I0108 21:31:15.503092  274657 ubuntu.go:169] provisioning hostname "old-k8s-version-211828"
	I0108 21:31:15.503141  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:15.527135  274657 main.go:134] libmachine: Using SSH client type: native
	I0108 21:31:15.527388  274657 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33047 <nil> <nil>}
	I0108 21:31:15.527414  274657 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-211828 && echo "old-k8s-version-211828" | sudo tee /etc/hostname
	I0108 21:31:15.528154  274657 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49092->127.0.0.1:33047: read: connection reset by peer
	I0108 21:31:18.652158  274657 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-211828
	
	I0108 21:31:18.652235  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:18.677352  274657 main.go:134] libmachine: Using SSH client type: native
	I0108 21:31:18.677632  274657 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33047 <nil> <nil>}
	I0108 21:31:18.677662  274657 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-211828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-211828/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-211828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:31:18.791306  274657 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:31:18.791338  274657 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:31:18.791356  274657 ubuntu.go:177] setting up certificates
	I0108 21:31:18.791364  274657 provision.go:83] configureAuth start
	I0108 21:31:18.791407  274657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-211828
	I0108 21:31:18.815953  274657 provision.go:138] copyHostCerts
	I0108 21:31:18.816006  274657 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:31:18.816012  274657 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:31:18.816081  274657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:31:18.816177  274657 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:31:18.816185  274657 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:31:18.816212  274657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:31:18.816273  274657 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:31:18.816281  274657 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:31:18.816304  274657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:31:18.816348  274657 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-211828 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-211828]
	I0108 21:31:18.931118  274657 provision.go:172] copyRemoteCerts
	I0108 21:31:18.931183  274657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:31:18.931217  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:18.955719  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:31:19.042817  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:31:19.060612  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 21:31:19.077223  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:31:19.093605  274657 provision.go:86] duration metric: configureAuth took 302.219123ms
	I0108 21:31:19.093631  274657 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:31:19.093784  274657 config.go:180] Loaded profile config "old-k8s-version-211828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0108 21:31:19.093794  274657 machine.go:91] provisioned docker machine in 3.590715689s
	I0108 21:31:19.093801  274657 start.go:300] post-start starting for "old-k8s-version-211828" (driver="docker")
	I0108 21:31:19.093807  274657 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:31:19.093848  274657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:31:19.093884  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:19.118184  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:31:19.206786  274657 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:31:19.209517  274657 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:31:19.209547  274657 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:31:19.209558  274657 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:31:19.209564  274657 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:31:19.209576  274657 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:31:19.209629  274657 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:31:19.209704  274657 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:31:19.209800  274657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:31:19.216505  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:31:19.232916  274657 start.go:303] post-start completed in 139.102319ms
	I0108 21:31:19.232985  274657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:31:19.233025  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:19.257132  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:31:19.339957  274657 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:31:19.343759  274657 fix.go:57] fixHost completed within 4.272947567s
	I0108 21:31:19.343776  274657 start.go:83] releasing machines lock for "old-k8s-version-211828", held for 4.272979327s
	I0108 21:31:19.343848  274657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-211828
	I0108 21:31:19.367793  274657 ssh_runner.go:195] Run: cat /version.json
	I0108 21:31:19.367832  274657 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0108 21:31:19.367913  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:19.367840  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:31:19.395829  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:31:19.396770  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:31:19.497144  274657 ssh_runner.go:195] Run: systemctl --version
	I0108 21:31:19.501133  274657 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:31:19.512197  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:31:19.521435  274657 docker.go:189] disabling docker service ...
	I0108 21:31:19.521487  274657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:31:19.530733  274657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:31:19.539679  274657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:31:19.619642  274657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:31:19.693532  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:31:19.702588  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:31:19.714970  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.1"|' -i /etc/containerd/config.toml"
	I0108 21:31:19.723127  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:31:19.730986  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:31:19.738308  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 21:31:19.746088  274657 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:31:19.752009  274657 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:31:19.757928  274657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:31:19.836380  274657 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:31:19.899437  274657 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:31:19.899536  274657 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:31:19.903121  274657 start.go:472] Will wait 60s for crictl version
	I0108 21:31:19.903177  274657 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:31:19.931573  274657 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:31:19Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 21:31:30.978568  274657 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:31:31.001293  274657 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:31:31.001343  274657 ssh_runner.go:195] Run: containerd --version
	I0108 21:31:31.023736  274657 ssh_runner.go:195] Run: containerd --version
	I0108 21:31:31.049215  274657 out.go:177] * Preparing Kubernetes v1.16.0 on containerd 1.6.10 ...
	I0108 21:31:31.050855  274657 cli_runner.go:164] Run: docker network inspect old-k8s-version-211828 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:31:31.072896  274657 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0108 21:31:31.076073  274657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:31:31.087169  274657 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0108 21:31:31.088521  274657 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0108 21:31:31.088579  274657 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:31:31.110490  274657 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:31:31.110508  274657 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:31:31.110556  274657 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:31:31.133748  274657 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:31:31.133766  274657 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:31:31.133809  274657 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:31:31.156636  274657 cni.go:95] Creating CNI manager for ""
	I0108 21:31:31.156662  274657 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:31:31.156675  274657 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:31:31.156688  274657 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-211828 NodeName:old-k8s-version-211828 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:31:31.156817  274657 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-211828"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-211828
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:31:31.156894  274657 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-211828 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-211828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:31:31.156938  274657 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0108 21:31:31.164010  274657 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:31:31.164059  274657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:31:31.170368  274657 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (567 bytes)
	I0108 21:31:31.182752  274657 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:31:31.195402  274657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2128 bytes)
	I0108 21:31:31.207914  274657 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:31:31.210710  274657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:31:31.219370  274657 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828 for IP: 192.168.76.2
	I0108 21:31:31.219455  274657 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:31:31.219534  274657 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:31:31.219611  274657 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/client.key
	I0108 21:31:31.219669  274657 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.key.31bdca25
	I0108 21:31:31.219701  274657 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/proxy-client.key
	I0108 21:31:31.219785  274657 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:31:31.219813  274657 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:31:31.219822  274657 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:31:31.219849  274657 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:31:31.219874  274657 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:31:31.219895  274657 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:31:31.219944  274657 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:31:31.220509  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:31:31.237015  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 21:31:31.253867  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:31:31.270214  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/old-k8s-version-211828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:31:31.286736  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:31:31.303748  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:31:31.321340  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:31:31.338473  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:31:31.355647  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:31:31.372647  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:31:31.389808  274657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:31:31.406899  274657 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:31:31.419384  274657 ssh_runner.go:195] Run: openssl version
	I0108 21:31:31.424189  274657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:31:31.431623  274657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:31:31.434625  274657 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:31:31.434666  274657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:31:31.439324  274657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:31:31.446001  274657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:31:31.453698  274657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:31:31.456687  274657 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:31:31.456735  274657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:31:31.461571  274657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:31:31.468289  274657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:31:31.475322  274657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:31:31.478233  274657 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:31:31.478271  274657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:31:31.483024  274657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:31:31.489456  274657 kubeadm.go:396] StartCluster: {Name:old-k8s-version-211828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-211828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:31:31.489561  274657 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:31:31.489594  274657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:31:31.514364  274657 cri.go:87] found id: "ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a"
	I0108 21:31:31.514386  274657 cri.go:87] found id: "4fdaee2b10f2982887b9869fcd0c1dc5ebe85380068a2c7ca6e308bbd418bf25"
	I0108 21:31:31.514401  274657 cri.go:87] found id: "a9e20d8377a666867d2e18a3ea12818eaa42542c47d99b0ba20c5f7b3c9a8f70"
	I0108 21:31:31.514407  274657 cri.go:87] found id: "3baeebbc6da6011661ac440d440193720d7cb3ffc1d6f51175b239cc7994d8d4"
	I0108 21:31:31.514412  274657 cri.go:87] found id: "dc587e05c9875fe35b86a28d7d5b8fc7bedc7907ec9abcf12c1883d15804ed4d"
	I0108 21:31:31.514419  274657 cri.go:87] found id: "18030e6256a0f097fd3fd026a18690f5b7e901b5dacd851696eb59d51effb330"
	I0108 21:31:31.514424  274657 cri.go:87] found id: ""
	I0108 21:31:31.514460  274657 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:31:31.525493  274657 cri.go:114] JSON = null
	W0108 21:31:31.525551  274657 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0108 21:31:31.525611  274657 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:31:31.532465  274657 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:31:31.532485  274657 kubeadm.go:627] restartCluster start
	I0108 21:31:31.532526  274657 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:31:31.538695  274657 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:31.539540  274657 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-211828" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:31:31.539974  274657 kubeconfig.go:146] "old-k8s-version-211828" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:31:31.540778  274657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:31:31.542454  274657 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:31:31.548835  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:31.548878  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:31.556574  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:31.756964  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:31.757026  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:31.765711  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:31.956987  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:31.957087  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:31.965822  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:32.157114  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:32.157204  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:32.165572  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:32.356849  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:32.356932  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:32.365936  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:32.557219  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:32.557301  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:32.565818  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:32.757103  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:32.757202  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:32.765601  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:32.956833  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:32.956909  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:32.965592  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:33.156802  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:33.156864  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:33.165214  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:33.357531  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:33.357620  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:33.366024  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:33.557341  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:33.557432  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:33.566047  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:33.757323  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:33.757407  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:33.766123  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:33.957421  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:33.957482  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:33.965897  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:34.157184  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:34.157255  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:34.165750  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:34.357066  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:34.357148  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:34.365686  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:34.556893  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:34.556978  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:34.566772  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:34.566791  274657 api_server.go:165] Checking apiserver status ...
	I0108 21:31:34.566823  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:31:34.574472  274657 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:31:34.574499  274657 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0108 21:31:34.574515  274657 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:31:34.574528  274657 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:31:34.574567  274657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:31:34.600377  274657 cri.go:87] found id: "ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a"
	I0108 21:31:34.600401  274657 cri.go:87] found id: "4fdaee2b10f2982887b9869fcd0c1dc5ebe85380068a2c7ca6e308bbd418bf25"
	I0108 21:31:34.600411  274657 cri.go:87] found id: "a9e20d8377a666867d2e18a3ea12818eaa42542c47d99b0ba20c5f7b3c9a8f70"
	I0108 21:31:34.600422  274657 cri.go:87] found id: "3baeebbc6da6011661ac440d440193720d7cb3ffc1d6f51175b239cc7994d8d4"
	I0108 21:31:34.600432  274657 cri.go:87] found id: "dc587e05c9875fe35b86a28d7d5b8fc7bedc7907ec9abcf12c1883d15804ed4d"
	I0108 21:31:34.600445  274657 cri.go:87] found id: "18030e6256a0f097fd3fd026a18690f5b7e901b5dacd851696eb59d51effb330"
	I0108 21:31:34.600455  274657 cri.go:87] found id: ""
	I0108 21:31:34.600466  274657 cri.go:232] Stopping containers: [ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a 4fdaee2b10f2982887b9869fcd0c1dc5ebe85380068a2c7ca6e308bbd418bf25 a9e20d8377a666867d2e18a3ea12818eaa42542c47d99b0ba20c5f7b3c9a8f70 3baeebbc6da6011661ac440d440193720d7cb3ffc1d6f51175b239cc7994d8d4 dc587e05c9875fe35b86a28d7d5b8fc7bedc7907ec9abcf12c1883d15804ed4d 18030e6256a0f097fd3fd026a18690f5b7e901b5dacd851696eb59d51effb330]
	I0108 21:31:34.600511  274657 ssh_runner.go:195] Run: which crictl
	I0108 21:31:34.603393  274657 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop ed44b6cd92e88414d48cecf1685b60dbd2dbe7298b8bcb9b67cba0741120746a 4fdaee2b10f2982887b9869fcd0c1dc5ebe85380068a2c7ca6e308bbd418bf25 a9e20d8377a666867d2e18a3ea12818eaa42542c47d99b0ba20c5f7b3c9a8f70 3baeebbc6da6011661ac440d440193720d7cb3ffc1d6f51175b239cc7994d8d4 dc587e05c9875fe35b86a28d7d5b8fc7bedc7907ec9abcf12c1883d15804ed4d 18030e6256a0f097fd3fd026a18690f5b7e901b5dacd851696eb59d51effb330
	I0108 21:31:34.628109  274657 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:31:34.637971  274657 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:31:34.645121  274657 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Jan  8 21:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Jan  8 21:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Jan  8 21:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Jan  8 21:18 /etc/kubernetes/scheduler.conf
	
	I0108 21:31:34.645173  274657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 21:31:34.651869  274657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 21:31:34.658451  274657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 21:31:34.665036  274657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 21:31:34.671382  274657 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:31:34.677810  274657 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:31:34.677835  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:31:34.729578  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:31:35.487872  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:31:35.629926  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:31:35.689674  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:31:35.830162  274657 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:31:35.830228  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:31:36.340067  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:31:36.839992  274657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:31:36.851033  274657 api_server.go:71] duration metric: took 1.020878979s to wait for apiserver process to appear ...
	I0108 21:31:36.851064  274657 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:31:36.851078  274657 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0108 21:31:36.851443  274657 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0108 21:31:37.352200  274657 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0108 21:31:40.719294  274657 api_server.go:278] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 21:31:40.719336  274657 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 21:31:40.852636  274657 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0108 21:31:41.014451  274657 api_server.go:278] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 21:31:41.014482  274657 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 21:31:41.352649  274657 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0108 21:31:41.360506  274657 api_server.go:278] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 21:31:41.360537  274657 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 21:31:41.852425  274657 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0108 21:31:41.856815  274657 api_server.go:278] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 21:31:41.856840  274657 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 21:31:42.352410  274657 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0108 21:31:42.357175  274657 api_server.go:278] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0108 21:31:42.364333  274657 api_server.go:140] control plane version: v1.16.0
	I0108 21:31:42.364358  274657 api_server.go:130] duration metric: took 5.513286094s to wait for apiserver health ...
	I0108 21:31:42.364370  274657 cni.go:95] Creating CNI manager for ""
	I0108 21:31:42.364378  274657 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:31:42.366614  274657 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:31:42.368355  274657 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:31:42.372561  274657 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0108 21:31:42.372575  274657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:31:42.385799  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:31:42.588841  274657 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:31:42.595748  274657 system_pods.go:59] 8 kube-system pods found
	I0108 21:31:42.595796  274657 system_pods.go:61] "coredns-5644d7b6d9-lm49s" [1d1a29e1-72fe-40c1-823a-6d72aa2c076e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0108 21:31:42.595806  274657 system_pods.go:61] "etcd-old-k8s-version-211828" [a29ed263-e80c-4429-bb11-a0a060e0195e] Running
	I0108 21:31:42.595818  274657 system_pods.go:61] "kindnet-9z2n8" [ec80e506-5c07-426a-96b5-39a19c3616de] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:31:42.595836  274657 system_pods.go:61] "kube-apiserver-old-k8s-version-211828" [7f42ca3b-9861-404f-8d3e-b651cd4e5808] Running
	I0108 21:31:42.595847  274657 system_pods.go:61] "kube-controller-manager-old-k8s-version-211828" [5cd01680-3dfa-4113-91f3-4e270c9b328b] Running
	I0108 21:31:42.595860  274657 system_pods.go:61] "kube-proxy-jqh6r" [970fd446-fdef-4fa3-87ea-f9d8ac2776ce] Running
	I0108 21:31:42.595869  274657 system_pods.go:61] "kube-scheduler-old-k8s-version-211828" [bfc7ce16-1608-4207-99d4-4f27c441dba0] Running
	I0108 21:31:42.595880  274657 system_pods.go:61] "storage-provisioner" [481120a6-a2e7-4086-8f77-17761b3efdbb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0108 21:31:42.595891  274657 system_pods.go:74] duration metric: took 7.026711ms to wait for pod list to return data ...
	I0108 21:31:42.595903  274657 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:31:42.598565  274657 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:31:42.598616  274657 node_conditions.go:123] node cpu capacity is 8
	I0108 21:31:42.598632  274657 node_conditions.go:105] duration metric: took 2.722133ms to run NodePressure ...
	I0108 21:31:42.598653  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:31:42.765770  274657 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 21:31:42.769311  274657 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I0108 21:31:43.133479  274657 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I0108 21:31:43.575452  274657 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I0108 21:31:44.106535  274657 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I0108 21:31:44.891244  274657 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0108 21:31:46.398419  274657 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0108 21:31:47.476145  274657 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0108 21:31:49.350616  274657 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0108 21:31:51.905434  274657 retry.go:31] will retry after 5.131623747s: kubelet not initialised
	I0108 21:31:57.042459  274657 retry.go:31] will retry after 9.757045979s: kubelet not initialised
	I0108 21:32:06.804972  274657 retry.go:31] will retry after 18.937774914s: kubelet not initialised
	I0108 21:32:25.746598  274657 retry.go:31] will retry after 15.44552029s: kubelet not initialised
	I0108 21:32:41.196996  274657 kubeadm.go:778] kubelet initialised
	I0108 21:32:41.197018  274657 kubeadm.go:779] duration metric: took 58.431224474s waiting for restarted kubelet to initialise ...
	I0108 21:32:41.197025  274657 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:32:41.201356  274657 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace to be "Ready" ...
	I0108 21:32:43.206560  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:45.206742  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:47.706026  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:50.205796  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:52.206925  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:54.705913  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:56.706559  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:59.206591  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:01.206633  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:03.705866  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:05.706546  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:07.706879  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:10.205696  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:12.206601  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:14.706422  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:17.205815  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.206912  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:21.706078  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:23.706755  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:25.706795  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:28.206074  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:30.206547  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:32.705805  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:35.205900  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:37.206575  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:39.706410  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:42.205820  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:44.206429  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:46.706576  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:49.206474  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:51.206583  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:53.706500  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:56.205754  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:58.206523  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:00.706734  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:03.206405  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:05.706010  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:07.706288  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:10.206460  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:12.705615  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:14.706005  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:17.206712  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:19.705849  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:21.706525  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:24.206204  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:26.206664  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:28.705923  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:30.705966  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:32.706184  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:34.706518  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.706768  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:39.205866  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... 51 additional pod_ready.go:102 lines omitted: the same Pending/Unschedulable status for pod "coredns-5644d7b6d9-lm49s" was logged roughly every 2.5s from 21:34:41 to 21:36:37 ...]
	I0108 21:36:39.206996  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:41.203867  274657 pod_ready.go:81] duration metric: took 4m0.002306196s waiting for pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:41.203901  274657 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:36:41.203940  274657 pod_ready.go:38] duration metric: took 4m0.006906053s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:41.203967  274657 kubeadm.go:631] restartCluster took 5m9.671476322s
	W0108 21:36:41.204176  274657 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
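The four-minute wait above times out because coredns-5644d7b6d9-lm49s never gets scheduled: the single node carries a taint the pod does not tolerate (on a node that has not reached Ready this is usually node.kubernetes.io/not-ready). A quick way to confirm which taint is blocking the pod, assuming kubectl access to this profile's context (object names are taken from the log; the commands are a suggested check, not part of the test run):

	kubectl --context old-k8s-version-211828 describe node old-k8s-version-211828 | grep -i -A2 taints
	kubectl --context old-k8s-version-211828 -n kube-system describe pod coredns-5644d7b6d9-lm49s | grep -i -A5 events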
	I0108 21:36:41.204211  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:36:42.410951  274657 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.206714622s)
	I0108 21:36:42.411034  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:42.420761  274657 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:42.427895  274657 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:36:42.427942  274657 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:36:42.434476  274657 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:36:42.434514  274657 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:36:42.479014  274657 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0108 21:36:42.479075  274657 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:36:42.506527  274657 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:36:42.506650  274657 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:36:42.506722  274657 kubeadm.go:317] OS: Linux
	I0108 21:36:42.506775  274657 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:36:42.506836  274657 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:36:42.506895  274657 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:36:42.506970  274657 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:36:42.507042  274657 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:36:42.507115  274657 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:36:42.575244  274657 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:36:42.575356  274657 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:36:42.575464  274657 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:36:42.705716  274657 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:36:42.707322  274657 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:36:42.714364  274657 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0108 21:36:42.788896  274657 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:36:42.793301  274657 out.go:204]   - Generating certificates and keys ...
	I0108 21:36:42.793445  274657 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:36:42.793584  274657 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:36:42.793709  274657 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:36:42.793804  274657 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:36:42.793866  274657 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:36:42.793909  274657 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:36:42.793956  274657 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:36:42.794003  274657 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:36:42.794059  274657 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:36:42.794113  274657 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:36:42.794145  274657 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:36:42.794211  274657 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:36:42.938030  274657 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:36:43.019391  274657 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:36:43.165446  274657 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:36:43.296073  274657 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:36:43.296890  274657 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:36:43.298841  274657 out.go:204]   - Booting up control plane ...
	I0108 21:36:43.298961  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:36:43.303628  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:36:43.304561  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:36:43.305309  274657 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:36:43.307378  274657 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
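At this step kubeadm has only written the static Pod manifests; it is the kubelet that picks them up from /etc/kubernetes/manifests and starts the control-plane containers, which is why the wait can take up to 4m0s. If a run stalls here, one way to watch progress from the host is to shell into the node with the minikube CLI (profile name from the log; a suggested check, not part of the test run):

	minikube ssh -p old-k8s-version-211828 -- sudo ls /etc/kubernetes/manifests
	minikube ssh -p old-k8s-version-211828 -- sudo crictl ps --name kube-apiserver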
	I0108 21:36:51.810038  274657 kubeadm.go:317] [apiclient] All control plane components are healthy after 8.502593 seconds
	I0108 21:36:51.810181  274657 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:36:51.821149  274657 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:36:52.336468  274657 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:36:52.336653  274657 kubeadm.go:317] [mark-control-plane] Marking the node old-k8s-version-211828 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0108 21:36:52.842409  274657 kubeadm.go:317] [bootstrap-token] Using token: ayw1nu.phe95ebgibs3udtw
	I0108 21:36:52.844083  274657 out.go:204]   - Configuring RBAC rules ...
	I0108 21:36:52.844190  274657 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:36:52.847569  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:36:52.850422  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:36:52.852561  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:36:52.854272  274657 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:36:52.894172  274657 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:36:53.257840  274657 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:36:53.258782  274657 kubeadm.go:317] 
	I0108 21:36:53.258856  274657 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:36:53.258871  274657 kubeadm.go:317] 
	I0108 21:36:53.258948  274657 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:36:53.258958  274657 kubeadm.go:317] 
	I0108 21:36:53.258988  274657 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:36:53.259068  274657 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:36:53.259119  274657 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:36:53.259126  274657 kubeadm.go:317] 
	I0108 21:36:53.259165  274657 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:36:53.259250  274657 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:36:53.259306  274657 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:36:53.259310  274657 kubeadm.go:317] 
	I0108 21:36:53.259383  274657 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities 
	I0108 21:36:53.259441  274657 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:36:53.259446  274657 kubeadm.go:317] 
	I0108 21:36:53.259539  274657 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token ayw1nu.phe95ebgibs3udtw \
	I0108 21:36:53.259662  274657 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:36:53.259688  274657 kubeadm.go:317]     --control-plane 	  
	I0108 21:36:53.259694  274657 kubeadm.go:317] 
	I0108 21:36:53.259813  274657 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:36:53.259829  274657 kubeadm.go:317] 
	I0108 21:36:53.259906  274657 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token ayw1nu.phe95ebgibs3udtw \
	I0108 21:36:53.260017  274657 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:36:53.262215  274657 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:36:53.262352  274657 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:36:53.262389  274657 cni.go:95] Creating CNI manager for ""
	I0108 21:36:53.262399  274657 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:36:53.264329  274657 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:36:53.265737  274657 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:36:53.269178  274657 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0108 21:36:53.269195  274657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:36:53.282457  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
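The CNI step is just a manifest copied to /var/tmp/minikube/cni.yaml and applied with the bundled kubectl; the log above chose kindnet for the docker driver + containerd combination. One way to check that the resulting pods come up (the app=kindnet label is an assumption based on minikube's kindnet manifest, it is not shown in this log):

	kubectl --context old-k8s-version-211828 -n kube-system get pods -l app=kindnet -o wide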
	I0108 21:36:53.488747  274657 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:36:53.488820  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:53.488836  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=old-k8s-version-211828 minikube.k8s.io/updated_at=2023_01_08T21_36_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:53.570539  274657 ops.go:34] apiserver oom_adj: -16
	I0108 21:36:53.570672  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... 28 additional retries of the same 'kubectl get sa default' command omitted, run roughly every 0.5s from 21:36:54 to 21:37:07 ...]
	I0108 21:37:08.167448  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:08.262221  274657 kubeadm.go:1067] duration metric: took 14.773463011s to wait for elevateKubeSystemPrivileges.
	I0108 21:37:08.262258  274657 kubeadm.go:398] StartCluster complete in 5m36.772809994s
	I0108 21:37:08.262281  274657 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:08.262401  274657 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:37:08.263456  274657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:08.779968  274657 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-211828" rescaled to 1
	I0108 21:37:08.780035  274657 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:37:08.781734  274657 out.go:177] * Verifying Kubernetes components...
	I0108 21:37:08.780090  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:37:08.780101  274657 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:37:08.780321  274657 config.go:180] Loaded profile config "old-k8s-version-211828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0108 21:37:08.783353  274657 addons.go:65] Setting dashboard=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783365  274657 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783367  274657 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783380  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:08.783385  274657 addons.go:227] Setting addon metrics-server=true in "old-k8s-version-211828"
	I0108 21:37:08.783387  274657 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-211828"
	W0108 21:37:08.783394  274657 addons.go:236] addon metrics-server should already be in state true
	I0108 21:37:08.783441  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783384  274657 addons.go:227] Setting addon dashboard=true in "old-k8s-version-211828"
	W0108 21:37:08.783526  274657 addons.go:236] addon dashboard should already be in state true
	I0108 21:37:08.783568  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783356  274657 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783648  274657 addons.go:227] Setting addon storage-provisioner=true in "old-k8s-version-211828"
	W0108 21:37:08.783668  274657 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:37:08.783727  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783776  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.783927  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.784028  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.784133  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.794999  274657 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-211828" to be "Ready" ...
	I0108 21:37:08.824991  274657 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:37:08.822967  274657 addons.go:227] Setting addon default-storageclass=true in "old-k8s-version-211828"
	W0108 21:37:08.825030  274657 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:37:08.825068  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.826962  274657 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:37:08.825542  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.828596  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:37:08.828602  274657 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:37:08.828610  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:37:08.828632  274657 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:08.830193  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:37:08.831697  274657 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:37:08.830251  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.828662  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.833415  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:37:08.833435  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:37:08.833477  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.865130  274657 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:08.865153  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:37:08.865262  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.870167  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.876829  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.891352  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.895346  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
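The pipeline above edits the coredns ConfigMap in place: sed inserts a hosts block ahead of the existing forward plugin so that host.minikube.internal resolves to the gateway address, and the result is piped back through kubectl replace. The relevant Corefile fragment after the edit looks like this (reconstructed from the sed expression in the command, not dumped from the cluster):

	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf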
	I0108 21:37:08.901551  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.966952  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:37:08.966980  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:37:09.020839  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:37:09.020864  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:37:09.026679  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:37:09.026702  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:37:09.035881  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:09.036053  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:09.037460  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:37:09.037484  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:37:09.113665  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:09.113699  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:37:09.126531  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:37:09.126566  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:37:09.132355  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:09.142671  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:37:09.142695  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:37:09.225954  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:37:09.225983  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:37:09.311794  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:37:09.311868  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:37:09.321460  274657 start.go:826] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0108 21:37:09.329750  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:37:09.329779  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:37:09.415014  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:37:09.415041  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:37:09.434577  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:09.434608  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:37:09.450703  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:09.848961  274657 addons.go:457] Verifying addon metrics-server=true in "old-k8s-version-211828"
	I0108 21:37:10.258944  274657 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-211828 addons enable metrics-server	
	
	
	I0108 21:37:10.260902  274657 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0108 21:37:10.262484  274657 addons.go:488] enableAddons completed in 1.482385235s
	I0108 21:37:10.800978  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:13.301617  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:15.801234  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:18.301292  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:20.301380  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:22.801865  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:25.301674  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:27.801420  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:30.301599  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:32.800759  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:35.301523  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:37.800886  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:39.801595  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:41.801850  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:44.301445  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:46.800798  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:48.800989  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:50.801073  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:52.801144  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:55.301797  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:57.801274  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:59.801607  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:02.300746  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:04.301290  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:06.801829  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:09.301092  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:11.301300  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:13.800777  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:15.801406  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:17.801519  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:19.801620  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:22.301152  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:24.801117  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:27.300926  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:29.301266  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:31.301555  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:33.800917  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:35.801221  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:37.801365  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:40.300687  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:42.301352  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:44.301680  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:46.801357  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:48.801472  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:51.300633  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:53.301297  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:55.801671  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:58.301397  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:00.801536  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:03.300754  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:05.301375  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:07.800934  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:09.801368  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:12.301198  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:14.801261  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:17.300721  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:19.301075  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:21.301289  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:23.301516  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:25.801475  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:28.301549  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:30.800660  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:33.301504  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:35.801029  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:37.801500  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:40.301529  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:42.800621  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:44.801100  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:47.300450  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:49.301320  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:51.801285  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:53.801488  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:56.301044  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:58.800845  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:01.301450  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:03.301533  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:05.800709  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:07.801022  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:10.300739  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:12.301541  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:14.801253  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:16.801334  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:18.801736  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:21.301555  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:23.800846  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:25.801246  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:28.301212  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:30.301480  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:32.800970  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:35.300833  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:37.801290  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:40.300917  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:42.301122  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:44.301723  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:46.801299  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:48.801395  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:51.301336  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:53.301705  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:55.801251  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:58.301027  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:00.301463  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:02.801220  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:04.801563  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:07.301530  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:08.802728  274657 node_ready.go:38] duration metric: took 4m0.007692604s waiting for node "old-k8s-version-211828" to be "Ready" ...
	I0108 21:41:08.805120  274657 out.go:177] 
	W0108 21:41:08.806709  274657 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:41:08.806733  274657 out.go:239] * 
	* 
	W0108 21:41:08.807656  274657 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:41:08.809434  274657 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-211828 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0": exit status 80
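Note on the failure above: the captured stderr is dominated by node_ready.go polling the node's Ready condition roughly every 2.5s; after 4m0s of polling the overall 6m0s start budget is exhausted and minikube exits with GUEST_START. A minimal sketch for inspecting the node condition and the kubelet directly when reproducing this failure (the profile name is taken from the log; the exact commands are illustrative and not part of the test harness):

	# Ask the apiserver why the node is not Ready (Conditions section of describe)
	out/minikube-linux-amd64 -p old-k8s-version-211828 kubectl -- describe node old-k8s-version-211828 | grep -A 15 Conditions
	# Check the kubelet inside the node, using the same ssh form the tests use
	out/minikube-linux-amd64 ssh -p old-k8s-version-211828 -- sudo journalctl -u kubelet --no-pager | tail -n 100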
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-211828
helpers_test.go:235: (dbg) docker inspect old-k8s-version-211828:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9",
	        "Created": "2023-01-08T21:18:34.933200191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274969,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:31:15.443918902Z",
	            "FinishedAt": "2023-01-08T21:31:13.76532174Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/hostname",
	        "HostsPath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/hosts",
	        "LogPath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9-json.log",
	        "Name": "/old-k8s-version-211828",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-211828:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-211828",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c-init/diff:/var/lib/docker/overlay2/08b33854295261bb38a0074055b3dd7f2f489d35ea70ee5a594182ad1dc0fd2b/diff:/var/lib/docker/overlay2/5abe192e44c49d2a25560e4c1ae4a6ba5ec58100f8e1869d306a28b614caf414/diff:/var/lib/docker/overlay2/156a32df2dac676d7cd6558d5ad13be5054eb2fddc8d1b10137bf34bd6b1ddf7/diff:/var/lib/docker/overlay2/f3a5b8630133b5cdc48bbdb98f666efad221386dbd08f5fc096c7ba6d3f31667/diff:/var/lib/docker/overlay2/4d4ccd218e479874818175ec6c4e28f06f0084eed74b9a83af5a96e8bf41e6c7/diff:/var/lib/docker/overlay2/ab8a3247b29cb45a0024dbc2477a50b042f03e39064c09e23d21a2b5f331013c/diff:/var/lib/docker/overlay2/cd6161fa5d366910a9eb537894f374c49b0ba256c81a9c1ad3d4f4786d730668/diff:/var/lib/docker/overlay2/a26d88db409214dee405ed385f40b0d58e37ef670c26e65c1132eb27e3bb984d/diff:/var/lib/docker/overlay2/f76d833c20bf46fe7aec412f931039cdeb58b129552b97228c55fada8b05e63a/diff:/var/lib/docker/overlay2/584beb
80220462b8bc5a6c651735d55544ebef775c71b1a91b92235049a7e9da/diff:/var/lib/docker/overlay2/5209ebd508837f05291d5f22eff307db47f9c94691dd096cdc6ff6446db45461/diff:/var/lib/docker/overlay2/968fc302c0737742d4698bb0026b5cfc50c4b2934930db492c97b346fca2bd67/diff:/var/lib/docker/overlay2/b469d09ff82a75fb0f6dd04264a04b21d3d113c8577afc6dbc6a7dafe4022179/diff:/var/lib/docker/overlay2/2869116cf2f02bdc6170bde623a11022eba8e5c0f84630c9a0fd0a86483d0ed2/diff:/var/lib/docker/overlay2/ad4794ae637b0ce306963007b35562110fba050d9e3cd6196830448c749d7cc6/diff:/var/lib/docker/overlay2/a53b191897dfffb12ad2948cad16d668a0f7c2be5e3e6fd2556fe22d8d608879/diff:/var/lib/docker/overlay2/bbc7772821406640e29f442eb8752d92d7b72e8314035d6cf94255976b90642e/diff:/var/lib/docker/overlay2/5e3ac5600af83e3f6f756895fbd7a66fa97e799f659541508353078d057fd89b/diff:/var/lib/docker/overlay2/79c23e7084aa1de637cb09a7c07679d6c8d4ff6c6e7384f7f15cb549f8186fd6/diff:/var/lib/docker/overlay2/86ef1d4759376f1edb9cf43004ae59ba61200d79acea357b872de5247071531a/diff:/var/lib/d
ocker/overlay2/5c948a7514e051c2a3b4600944d475a674404c1f58b1bd7fd4d7b764be35c3e7/diff:/var/lib/docker/overlay2/71b25d81823337d776e0202f623b2f822a2ebba0e8f935e3211da145bbdac22d/diff:/var/lib/docker/overlay2/4261f26f8c837e7f8f07ba0cd2aca7274aa96561ac0bd104af28d44c15678f8d/diff:/var/lib/docker/overlay2/f2a71b21156c3ad367bb9fe8bde3e75df6681f9e505d34d8e1b635e50aa0e29b/diff:/var/lib/docker/overlay2/9ef1fde443b60bf4cda7ef2a51adcacef4383280f30eb6ee14e23959a5395e18/diff:/var/lib/docker/overlay2/aa727d5a432452b3e1ed2fc994db0bf2d5a47af89bc768a1e2b633baa8319025/diff:/var/lib/docker/overlay2/dc9499b738334b5626de1363086d853c4e6c2c5b2d51f58e8e1ed8346cc39f45/diff:/var/lib/docker/overlay2/03cc962675627b454b9857b5ffcac439143653d35cd42ad886571fbecd955c6c/diff:/var/lib/docker/overlay2/32064543ecb8bae609fda6b220233d7b73f4e601bd320e1d389451fb120c2459/diff:/var/lib/docker/overlay2/a0c00493cc3fdbdaa87f61faf83c8fcedb9ee4d5b24ecc35eceba0429b6dcc88/diff:/var/lib/docker/overlay2/18cbe77f8dab8144889e1deb46a8795fd207afbdfd9ea3fdeb526b72d7e
4a4cb/diff:/var/lib/docker/overlay2/dca4f5d0bca28bc75fa764afd39d07871f051c2d362fc2850ca957b975f75555/diff:/var/lib/docker/overlay2/cd071d8f661b9ff9c6d861fce06842188fda3d307062488b93944a97883f4a67/diff:/var/lib/docker/overlay2/5a0c166a1e11d0bb9f7de4a882be960f70ac972e91b3ab8fd83fc6eec2bf1d5e/diff:/var/lib/docker/overlay2/43eb3ae44d3bf3e6cbf5a66accacb1247ef210213c56a67097949d06794f4bbc/diff:/var/lib/docker/overlay2/b60cb5b525f4aed6b27cf2e451596e134d1185254f3798d5a91357814552e273/diff:/var/lib/docker/overlay2/e5c8c85e78fedb1981436f368ad70bdfe38d240dddceac3d103e1d09bc252cef/diff:/var/lib/docker/overlay2/7bf66ab7d17eda3c6760fe43588afa33bb56279f5a7202de9cc98b4711f0da5b/diff:/var/lib/docker/overlay2/d5dbd6e8a20b8681e7b55b57c382c8c9029da8496cc9e0f4d1eb3b224583535e/diff:/var/lib/docker/overlay2/de3ca9b514aa5071c0fd39d0d80fa080bce6e82e872747ec916ef7bf379c0c2a/diff:/var/lib/docker/overlay2/80bff03df7f2a6639d98a2c99d1cc55a36dc9110501452bfd7bce3bce496541e/diff:/var/lib/docker/overlay2/471bea5a4732f210c3fcf87a02643b615a96f6
39d01ac2c3e71f7886b23733a5/diff:/var/lib/docker/overlay2/29dd84326e3ef52c85008a60a7ff2225fd18f1d51ec3e60f143a0d93e442469d/diff:/var/lib/docker/overlay2/05d035eba3e40a396836d9917064869afd7ec70920f526f8903dc8f5a9f2dce3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-211828",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-211828/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-211828",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-211828",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-211828",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "309018aa666998324b7412f25b087ca70d071f695cfc1d9a8c847612c87e3f79",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33047"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33046"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33043"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33044"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/309018aa6669",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-211828": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f66150df9bfb",
	                        "old-k8s-version-211828"
	                    ],
	                    "NetworkID": "e48a739a7de53b0a2a21ddeaf3e573efe5bbf8c41c6a15cbe1e7c39d0f359d82",
	                    "EndpointID": "eade8242d93b9948df14457042458d9f5c41719567074de6be7d51293c5d2da9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
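The inspect output above suggests the Docker layer is not the problem: State.Status is "running", the profile volume and the read-only /lib/modules bind are mounted, and ports 22/2376/5000/8443/32443 are published on 127.0.0.1, which points at something inside the guest rather than the container itself. When triaging, the same few fields can be pulled out with Go templates instead of reading the full JSON; a sketch, with the container name taken from the log and the commands purely illustrative:

	# Container state and restart history
	docker inspect old-k8s-version-211828 --format '{{.State.Status}} restarts={{.RestartCount}} started={{.State.StartedAt}}'
	# Host port backing the Kubernetes apiserver (8443 in the container)
	docker port old-k8s-version-211828 8443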
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-211828 -n old-k8s-version-211828
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-211828 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-211828 logs -n 25: (1.130623696s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-212639                 | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-212639                      | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-212639 sudo                                  | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| addons  | enable metrics-server -p old-k8s-version-211828            | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-211828                                  | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-211828                 | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-211828                                  | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --kvm-network=default                                      |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                              |         |         |                     |                     |
	|         | --keep-context=false                                       |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-211859                 | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p no-preload-211859                                       | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-211859                      | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p no-preload-211859                                       | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr                                          |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC | 08 Jan 23 21:32 UTC |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC | 08 Jan 23 21:32 UTC |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-211952           | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC | 08 Jan 23 21:32 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC |                     |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 21:32:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:32:48.271671  282279 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:32:48.271850  282279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:32:48.271858  282279 out.go:309] Setting ErrFile to fd 2...
	I0108 21:32:48.271863  282279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:32:48.271968  282279 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:32:48.272502  282279 out.go:303] Setting JSON to false
	I0108 21:32:48.273983  282279 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4518,"bootTime":1673209051,"procs":571,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:32:48.274047  282279 start.go:135] virtualization: kvm guest
	I0108 21:32:48.276504  282279 out.go:177] * [default-k8s-diff-port-211952] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:32:48.277957  282279 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:32:48.277885  282279 notify.go:220] Checking for updates...
	I0108 21:32:48.279445  282279 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:32:48.280736  282279 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:32:48.281949  282279 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:32:48.283257  282279 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:32:48.285163  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:32:48.285682  282279 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:32:48.316260  282279 docker.go:137] docker version: linux-20.10.22
	I0108 21:32:48.316350  282279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:32:48.413793  282279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:32:48.33729701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:32:48.413905  282279 docker.go:254] overlay module found
	I0108 21:32:48.417336  282279 out.go:177] * Using the docker driver based on existing profile
	I0108 21:32:48.418815  282279 start.go:294] selected driver: docker
	I0108 21:32:48.418829  282279 start.go:838] validating driver "docker" against &{Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:32:48.419310  282279 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:32:48.420906  282279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:32:48.521697  282279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:32:48.442146841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:32:48.522015  282279 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:32:48.522046  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:32:48.522065  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:32:48.522085  282279 start_flags.go:317] config:
	{Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:32:48.525023  282279 out.go:177] * Starting control plane node default-k8s-diff-port-211952 in cluster default-k8s-diff-port-211952
	I0108 21:32:48.526212  282279 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:32:48.527567  282279 out.go:177] * Pulling base image ...
	I0108 21:32:48.528812  282279 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:32:48.528852  282279 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0108 21:32:48.528864  282279 cache.go:57] Caching tarball of preloaded images
	I0108 21:32:48.528902  282279 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:32:48.529139  282279 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:32:48.529153  282279 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
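(The preload check above reduces to a stat on a versioned tarball under the profile's cache directory. A minimal Go sketch of that lookup, with the path layout copied from the log and the helper name invented for illustration:)

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the tarball name seen in the log for a given Kubernetes
// version and container runtime (naming copied from the log, not from minikube's code).
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.25.3", "containerd")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload:", p, "- skipping download")
	} else {
		fmt.Println("no local preload, would download:", p)
	}
}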
	I0108 21:32:48.529259  282279 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/config.json ...
	I0108 21:32:48.553994  282279 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:32:48.554019  282279 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:32:48.554037  282279 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:32:48.554075  282279 start.go:364] acquiring machines lock for default-k8s-diff-port-211952: {Name:mk8d09fc97f48331eb5f466fa120df2ec3fb1468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:32:48.554172  282279 start.go:368] acquired machines lock for "default-k8s-diff-port-211952" in 76.094µs
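(The machines lock acquired here is a named lock with a 500ms retry delay and a 10m timeout. A simplified sketch of the same acquire/release shape, using a plain lock file instead of minikube's locking library:)

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file, retrying every delay until the
// timeout expires - a stand-in for the machines lock parameters shown above.
func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("acquired machines lock")
}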
	I0108 21:32:48.554190  282279 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:32:48.554194  282279 fix.go:55] fixHost starting: 
	I0108 21:32:48.554387  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:32:48.579038  282279 fix.go:103] recreateIfNeeded on default-k8s-diff-port-211952: state=Stopped err=<nil>
	W0108 21:32:48.579064  282279 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:32:48.581203  282279 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-211952" ...
	I0108 21:32:45.206742  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:47.706026  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:47.985367  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:50.484419  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:48.582569  282279 cli_runner.go:164] Run: docker start default-k8s-diff-port-211952
	I0108 21:32:48.934338  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:32:48.961177  282279 kic.go:415] container "default-k8s-diff-port-211952" state is running.
	I0108 21:32:48.961578  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:48.987154  282279 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/config.json ...
	I0108 21:32:48.987361  282279 machine.go:88] provisioning docker machine ...
	I0108 21:32:48.987381  282279 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-211952"
	I0108 21:32:48.987415  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:49.012441  282279 main.go:134] libmachine: Using SSH client type: native
	I0108 21:32:49.012623  282279 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0108 21:32:49.012640  282279 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-211952 && echo "default-k8s-diff-port-211952" | sudo tee /etc/hostname
	I0108 21:32:49.013295  282279 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56504->127.0.0.1:33057: read: connection reset by peer
	I0108 21:32:52.144323  282279 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-211952
	
	I0108 21:32:52.144405  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.170929  282279 main.go:134] libmachine: Using SSH client type: native
	I0108 21:32:52.171092  282279 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0108 21:32:52.171123  282279 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-211952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-211952/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-211952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:32:52.287354  282279 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:32:52.287380  282279 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:32:52.287397  282279 ubuntu.go:177] setting up certificates
	I0108 21:32:52.287404  282279 provision.go:83] configureAuth start
	I0108 21:32:52.287448  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:52.314640  282279 provision.go:138] copyHostCerts
	I0108 21:32:52.314692  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:32:52.314701  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:32:52.314776  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:32:52.314872  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:32:52.314881  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:32:52.314915  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:32:52.314981  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:32:52.314990  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:32:52.315028  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:32:52.315090  282279 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-211952 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-211952]
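(The provision step above issues a server certificate whose SANs cover the node IP, loopback and the profile name. A self-contained sketch of that signing step with Go's crypto/x509; it generates a throwaway CA in memory rather than loading ca.pem/ca-key.pem from MINIKUBE_HOME as the real flow does:)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key pair (illustrative only; the real flow reuses the profile CA).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the provision log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-211952"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "default-k8s-diff-port-211952"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}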
	I0108 21:32:52.393623  282279 provision.go:172] copyRemoteCerts
	I0108 21:32:52.393682  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:32:52.393732  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.420616  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.506700  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:32:52.523990  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0108 21:32:52.541202  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:32:52.558612  282279 provision.go:86] duration metric: configureAuth took 271.196425ms
	I0108 21:32:52.558637  282279 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:32:52.558842  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:32:52.558859  282279 machine.go:91] provisioned docker machine in 3.571482619s
	I0108 21:32:52.558868  282279 start.go:300] post-start starting for "default-k8s-diff-port-211952" (driver="docker")
	I0108 21:32:52.558880  282279 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:32:52.558932  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:32:52.558975  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.584657  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.674855  282279 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:32:52.677553  282279 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:32:52.677581  282279 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:32:52.677595  282279 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:32:52.677605  282279 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:32:52.677620  282279 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:32:52.677677  282279 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:32:52.677760  282279 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:32:52.677874  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:32:52.684482  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:32:52.701176  282279 start.go:303] post-start completed in 142.293081ms
	I0108 21:32:52.701237  282279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:32:52.701267  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.726596  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.807879  282279 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:32:52.811789  282279 fix.go:57] fixHost completed within 4.257589708s
	I0108 21:32:52.811814  282279 start.go:83] releasing machines lock for "default-k8s-diff-port-211952", held for 4.257630168s
	I0108 21:32:52.811884  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:52.836240  282279 ssh_runner.go:195] Run: cat /version.json
	I0108 21:32:52.836282  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.836337  282279 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:32:52.836380  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.860700  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.862030  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.970766  282279 ssh_runner.go:195] Run: systemctl --version
	I0108 21:32:52.974774  282279 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:32:52.987146  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:32:52.996877  282279 docker.go:189] disabling docker service ...
	I0108 21:32:52.996922  282279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:32:53.006589  282279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:32:53.015555  282279 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:32:53.091863  282279 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:32:53.169568  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:32:53.178903  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:32:53.192470  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:32:53.200832  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:32:53.209487  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:32:53.217000  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
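(The four sed invocations above each rewrite a single line of /etc/containerd/config.toml: pinning the sandbox image, relaxing restrict_oom_score_adj, disabling the systemd cgroup driver and pointing the CNI conf_dir at minikube's directory. A local Go sketch of the same edits, with values copied from the log and the function name purely illustrative:)

package main

import (
	"fmt"
	"os"
	"regexp"
)

// patchContainerdConfig applies the same line rewrites as the sed commands above.
func patchContainerdConfig(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	rules := []struct{ re, repl string }{
		{`(?m)^.*sandbox_image = .*$`, `sandbox_image = "registry.k8s.io/pause:3.8"`},
		{`(?m)^.*restrict_oom_score_adj = .*$`, `restrict_oom_score_adj = false`},
		{`(?m)^.*SystemdCgroup = .*$`, `SystemdCgroup = false`},
		{`(?m)^.*conf_dir = .*$`, `conf_dir = "/etc/cni/net.mk"`},
	}
	for _, r := range rules {
		data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.repl))
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := patchContainerdConfig("/etc/containerd/config.toml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}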
	I0108 21:32:53.224820  282279 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:32:53.231063  282279 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:32:53.237511  282279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:32:50.205796  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:52.206925  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:54.705913  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:52.485249  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:54.984287  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:56.984440  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:53.318100  282279 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:32:53.382213  282279 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:32:53.382279  282279 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:32:53.386027  282279 start.go:472] Will wait 60s for crictl version
	I0108 21:32:53.386088  282279 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:32:53.410740  282279 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:32:53Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 21:32:56.706559  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:59.206591  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:59.485251  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:01.985238  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:04.458457  282279 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:33:04.481958  282279 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:33:04.482015  282279 ssh_runner.go:195] Run: containerd --version
	I0108 21:33:04.505934  282279 ssh_runner.go:195] Run: containerd --version
	I0108 21:33:04.531417  282279 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:33:01.206633  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:03.705866  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:04.484384  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:06.484587  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:04.533192  282279 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-211952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:33:04.556070  282279 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0108 21:33:04.559379  282279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
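(This command drops any stale host.minikube.internal line from /etc/hosts and appends the gateway mapping. The same effect, done locally in Go for illustration:)

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing "<ip>\t<name>" line and appends the
// desired one - mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.67.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}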
	I0108 21:33:04.568499  282279 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:33:04.568548  282279 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:33:04.591581  282279 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:33:04.591606  282279 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:33:04.591658  282279 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:33:04.614523  282279 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:33:04.614545  282279 cache_images.go:84] Images are preloaded, skipping loading
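(The preload verification above parses `sudo crictl images --output json` and confirms the expected tags are already present before skipping extraction. A sketch of that check; the expected list here is a short illustrative subset, not minikube's full image set:)

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages returns the set of repo tags known to the container runtime.
func crictlImages() (map[string]bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return nil, err
	}
	var resp struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, img := range resp.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	return have, nil
}

func main() {
	have, err := crictlImages()
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, want := range []string{"registry.k8s.io/kube-apiserver:v1.25.3", "registry.k8s.io/pause:3.8"} {
		fmt.Printf("%s preloaded: %v\n", want, have[want])
	}
}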
	I0108 21:33:04.614587  282279 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:33:04.638172  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:33:04.638197  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:33:04.638209  282279 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:33:04.638221  282279 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-211952 NodeName:default-k8s-diff-port-211952 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/c
erts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:33:04.638396  282279 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-diff-port-211952"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:33:04.638498  282279 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-diff-port-211952 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
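(The kubeadm config and kubelet unit above are rendered from the options struct logged at kubeadm.go:158. A trimmed sketch of that render step using text/template, covering only a few of the ClusterConfiguration fields visible in the log:)

package main

import (
	"os"
	"text/template"
)

// A reduced template for the ClusterConfiguration rendered above (illustrative only).
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	// Values taken from the kubeadm options struct printed in the log.
	params := map[string]string{
		"ControlPlaneAddress": "control-plane.minikube.internal",
		"APIServerPort":       "8444",
		"KubernetesVersion":   "v1.25.3",
		"DNSDomain":           "cluster.local",
		"PodSubnet":           "10.244.0.0/16",
		"ServiceCIDR":         "10.96.0.0/12",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(clusterCfg))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}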
	I0108 21:33:04.638546  282279 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:33:04.645671  282279 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:33:04.645725  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:33:04.652367  282279 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (521 bytes)
	I0108 21:33:04.664767  282279 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:33:04.676853  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes)
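(The three `scp memory -->` lines write generated file contents straight to the node rather than copying files from disk. A rough equivalent that pipes an in-memory payload through ssh to `sudo tee`; the port and key path are taken from the log, and the unit body is abbreviated:)

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// copyToNode streams an in-memory payload to the node over ssh and writes it with sudo tee.
func copyToNode(payload []byte, remotePath string) error {
	cmd := exec.Command("ssh", "-p", "33057",
		"-i", "/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa",
		"docker@127.0.0.1", "sudo tee "+remotePath+" >/dev/null")
	cmd.Stdin = bytes.NewReader(payload)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("copy %s: %v\n%s", remotePath, err, out)
	}
	fmt.Printf("wrote %d bytes to %s\n", len(payload), remotePath)
	return nil
}

func main() {
	unit := []byte("[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet ...\n")
	if err := copyToNode(unit, "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"); err != nil {
		fmt.Println(err)
	}
}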
	I0108 21:33:04.689096  282279 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:33:04.691974  282279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:33:04.700883  282279 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952 for IP: 192.168.67.2
	I0108 21:33:04.700988  282279 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:33:04.701028  282279 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:33:04.701091  282279 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/client.key
	I0108 21:33:04.701143  282279 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key.c7fa3a9e
	I0108 21:33:04.701174  282279 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.key
	I0108 21:33:04.701257  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:33:04.701282  282279 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:33:04.701292  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:33:04.701314  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:33:04.701334  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:33:04.701353  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:33:04.701392  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:33:04.701980  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:33:04.719063  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:33:04.735492  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:33:04.752219  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:33:04.769562  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:33:04.785821  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:33:04.802771  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:33:04.820712  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:33:04.838855  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:33:04.855960  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:33:04.872964  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:33:04.890046  282279 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:33:04.902625  282279 ssh_runner.go:195] Run: openssl version
	I0108 21:33:04.907630  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:33:04.914856  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.917989  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.918039  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.922582  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:33:04.929304  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:33:04.936712  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.939656  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.939705  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.944460  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:33:04.951168  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:33:04.958399  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.961446  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.961485  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.966099  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
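(Each CA installed above is hashed with `openssl x509 -hash -noout` and symlinked into /etc/ssl/certs as <hash>.0 so OpenSSL's trust lookup can find it. A sketch of that pairing; running it against the real paths requires root:)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert computes the certificate's subject hash and symlinks it as <hash>.0.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := certsDir + "/" + hash + ".0"
	os.Remove(link) // equivalent to ln -fs: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}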
	I0108 21:33:04.973053  282279 kubeadm.go:396] StartCluster: {Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:33:04.973140  282279 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:33:04.973193  282279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:33:04.997395  282279 cri.go:87] found id: "852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	I0108 21:33:04.997418  282279 cri.go:87] found id: "7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc"
	I0108 21:33:04.997424  282279 cri.go:87] found id: "26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225"
	I0108 21:33:04.997430  282279 cri.go:87] found id: "581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d"
	I0108 21:33:04.997436  282279 cri.go:87] found id: "e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa"
	I0108 21:33:04.997442  282279 cri.go:87] found id: "b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d"
	I0108 21:33:04.997448  282279 cri.go:87] found id: ""
	I0108 21:33:04.997486  282279 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:33:05.008860  282279 cri.go:114] JSON = null
	W0108 21:33:05.008911  282279 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0108 21:33:05.008979  282279 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:33:05.015919  282279 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:33:05.015939  282279 kubeadm.go:627] restartCluster start
	I0108 21:33:05.015976  282279 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:33:05.022384  282279 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.023096  282279 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-211952" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:33:05.023497  282279 kubeconfig.go:146] "default-k8s-diff-port-211952" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:33:05.024165  282279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:33:05.025421  282279 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:33:05.032110  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.032154  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.039769  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.240114  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.240215  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.248661  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.439925  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.440040  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.448824  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.640029  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.640100  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.648577  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.839823  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.839949  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.848450  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.040650  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.040716  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.049118  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.240431  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.240537  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.249216  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.440559  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.440631  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.449237  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.640348  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.640440  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.648807  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.840116  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.840207  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.848729  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.039918  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.039988  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.048542  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.240718  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.240800  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.249405  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.440610  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.440687  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.449502  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.640620  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.640687  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.649358  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.840624  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.840691  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.849725  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.039967  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:08.040051  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:08.048653  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.048676  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:08.048717  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:08.056766  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.056803  282279 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
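(The repeated `pgrep -xnf kube-apiserver.*minikube.*` checks above form a poll loop: if no apiserver process appears before the deadline, the cluster is marked as needing reconfiguration. A compact sketch of that loop; the 200ms interval and 3s budget are inferred from the timestamps, not taken from minikube's source:)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverPID runs the same pgrep check as the log above and returns the pid.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("unable to get apiserver pid: %w", err)
	}
	return string(out), nil
}

func main() {
	deadline := time.Now().Add(3 * time.Second)
	for time.Now().Before(deadline) {
		if pid, err := apiserverPID(); err == nil {
			fmt.Print("apiserver pid: ", pid)
			return
		}
		time.Sleep(200 * time.Millisecond)
	}
	fmt.Println("needs reconfigure: apiserver error: timed out waiting for the condition")
}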
	I0108 21:33:08.056811  282279 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:33:08.056824  282279 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:33:08.056880  282279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:33:08.081283  282279 cri.go:87] found id: "852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	I0108 21:33:08.081308  282279 cri.go:87] found id: "7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc"
	I0108 21:33:08.081315  282279 cri.go:87] found id: "26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225"
	I0108 21:33:08.081322  282279 cri.go:87] found id: "581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d"
	I0108 21:33:08.081330  282279 cri.go:87] found id: "e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa"
	I0108 21:33:08.081340  282279 cri.go:87] found id: "b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d"
	I0108 21:33:08.081349  282279 cri.go:87] found id: ""
	I0108 21:33:08.081357  282279 cri.go:232] Stopping containers: [852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f 7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc 26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225 581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d]
	I0108 21:33:08.081407  282279 ssh_runner.go:195] Run: which crictl
	I0108 21:33:08.084402  282279 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f 7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc 26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225 581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d
	I0108 21:33:08.110089  282279 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:33:08.120362  282279 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:33:08.127839  282279 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan  8 21:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  8 21:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Jan  8 21:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  8 21:20 /etc/kubernetes/scheduler.conf
	
	I0108 21:33:08.127889  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0108 21:33:08.134530  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0108 21:33:08.141215  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0108 21:33:08.147849  282279 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.147901  282279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 21:33:08.154323  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0108 21:33:08.161096  282279 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.161153  282279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 21:33:08.167783  282279 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:33:08.174752  282279 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:33:08.174774  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.220042  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:05.706546  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:07.706879  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:08.484783  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:10.985364  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:08.629802  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.761310  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.827730  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.933064  282279 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:33:08.933117  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:09.442969  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:09.942976  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:10.014802  282279 api_server.go:71] duration metric: took 1.081741817s to wait for apiserver process to appear ...
	I0108 21:33:10.014831  282279 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:33:10.014843  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:10.205696  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:12.206601  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:14.706422  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:13.540654  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:33:13.540692  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:33:14.041349  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:14.045672  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:33:14.045695  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:33:14.540838  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:14.545990  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:33:14.546035  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:33:15.041627  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:15.046572  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 200:
	ok
	I0108 21:33:15.052817  282279 api_server.go:140] control plane version: v1.25.3
	I0108 21:33:15.052839  282279 api_server.go:130] duration metric: took 5.038002036s to wait for apiserver health ...
	I0108 21:33:15.052848  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:33:15.052854  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:33:15.055132  282279 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:33:13.484537  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:15.484590  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:15.056590  282279 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:33:15.060305  282279 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:33:15.060320  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:33:15.073482  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:33:15.711930  282279 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:33:15.718666  282279 system_pods.go:59] 9 kube-system pods found
	I0108 21:33:15.718695  282279 system_pods.go:61] "coredns-565d847f94-fd94f" [08c29923-1e9a-4576-884b-e79485bdb24e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718706  282279 system_pods.go:61] "etcd-default-k8s-diff-port-211952" [4d6fe94c-75ef-40cf-b1c1-2377203f2503] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:33:15.718714  282279 system_pods.go:61] "kindnet-52cqk" [4ae6659c-e68a-492e-9e3f-5ffb047114c5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:33:15.718719  282279 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-211952" [e7f5a5bc-2f08-46ed-b8e1-1551fa29d27c] Running
	I0108 21:33:15.718728  282279 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-211952" [28c6bf68-0f27-494d-9102-fc669542c4a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:33:15.718735  282279 system_pods.go:61] "kube-proxy-hz8lw" [fa7c0714-1e45-4256-9383-976e79d1e49e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:33:15.718742  282279 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-211952" [645cd11b-9e55-47fe-aa43-f3b702c95c45] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:33:15.718751  282279 system_pods.go:61] "metrics-server-5c8fd5cf8-l2hp5" [bcd90320-490a-4343-abcb-f40aa375512e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718757  282279 system_pods.go:61] "storage-provisioner" [ad01ceaf-4269-4a54-b47e-b56d85e14354] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718765  282279 system_pods.go:74] duration metric: took 6.815857ms to wait for pod list to return data ...
	I0108 21:33:15.718772  282279 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:33:15.721658  282279 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:33:15.721678  282279 node_conditions.go:123] node cpu capacity is 8
	I0108 21:33:15.721690  282279 node_conditions.go:105] duration metric: took 2.910879ms to run NodePressure ...
	I0108 21:33:15.721709  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:15.850359  282279 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 21:33:15.854037  282279 kubeadm.go:778] kubelet initialised
	I0108 21:33:15.854056  282279 kubeadm.go:779] duration metric: took 3.67496ms waiting for restarted kubelet to initialise ...
	I0108 21:33:15.854063  282279 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:33:15.859567  282279 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:17.864672  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:17.205815  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.206912  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:17.485768  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.985283  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.865551  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:22.365227  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:21.706078  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:23.706755  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:22.485377  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:24.984649  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:24.865051  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:27.364362  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:25.706795  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:28.206074  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:27.484652  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:29.484907  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:31.985181  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:29.365262  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:31.864536  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:30.206547  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:32.705805  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:34.484659  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:36.985157  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:33.865545  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:36.364706  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:35.205900  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:37.206575  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:39.706410  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:39.484405  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:41.485144  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:38.366314  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:40.865544  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:42.205820  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:44.206429  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:43.985033  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:45.985104  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:43.364368  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:45.365457  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:47.865583  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:46.706576  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:49.206474  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:47.985130  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:50.484792  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:50.365374  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:52.865225  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:51.206583  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:53.706500  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:52.984520  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:54.984810  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:55.364623  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:57.365130  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:56.205754  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:58.206523  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:57.484534  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:59.984319  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:01.985026  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:59.865408  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:02.364929  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:00.706734  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:03.206405  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:04.485051  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:06.984884  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:04.864561  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:07.366326  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:05.706010  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:07.706288  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:08.985455  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:11.485043  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:09.865391  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:12.364526  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:10.206460  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:12.705615  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:14.706005  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:13.984826  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:16.484152  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:14.364606  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:16.365289  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:17.206712  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:19.705849  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:18.485130  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:20.485537  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:18.864582  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:20.865195  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:22.865407  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:21.706525  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:24.206204  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:22.984564  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:24.984654  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:25.364979  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:27.365790  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:26.206664  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:28.705923  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:27.485200  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:29.984779  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:31.984961  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:29.865042  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:31.865310  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:30.705966  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:32.706184  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:34.706518  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:33.985148  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.484872  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:33.865432  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.365146  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.706768  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:39.205866  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:38.485130  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:40.984717  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:38.865173  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:41.364499  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:41.705813  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.706112  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.484553  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:45.984290  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.365079  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:45.365570  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:47.865054  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:46.206566  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:48.706606  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:47.984724  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:50.484463  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:50.365544  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:52.864342  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:51.206067  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:53.206386  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:52.484509  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:54.484628  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:56.984663  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:54.865174  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:56.865226  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:55.705777  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:58.206536  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:58.985043  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:00.985441  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:59.365717  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:01.865247  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:00.705686  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:02.706281  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:03.484874  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:05.485178  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:03.865438  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:06.365588  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:05.206221  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:07.206742  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:09.706286  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:07.485379  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:09.485491  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:11.985421  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:08.865293  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:11.364853  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:12.205938  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:14.206587  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:14.484834  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:16.984217  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:13.864458  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:15.865297  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:16.706511  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:19.206844  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:18.985241  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:21.485361  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:18.364605  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:20.365307  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:22.865280  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:21.706576  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:24.206264  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:23.984764  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:25.984921  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:25.365211  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:27.865212  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:26.706631  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:29.205837  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:28.485111  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:30.984944  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:29.865294  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:32.365083  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:31.206819  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:33.706459  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:33.485037  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:35.984758  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:34.864627  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:36.865632  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:36.206617  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:38.705904  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:37.984809  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:39.984942  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:41.985321  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:39.365282  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:41.365393  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:40.706491  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:43.206589  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:44.484609  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:46.985153  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:43.865525  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:46.364697  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:45.705645  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:47.705922  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:49.706709  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:49.484711  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:51.485242  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:48.365304  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:50.865062  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:52.206076  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:54.206636  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:53.984904  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:55.985190  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:53.364585  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:55.866756  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:56.706242  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.706485  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.484404  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.485044  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.365278  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.864694  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:02.865305  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.706662  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:03.206301  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:02.485191  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:04.984589  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:05.365592  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.865076  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:05.705915  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.706822  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.484499  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:09.985336  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:10.364594  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.365393  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:10.206345  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.206780  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:14.705921  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.485725  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:14.982268  278286 pod_ready.go:81] duration metric: took 4m0.003125371s waiting for pod "coredns-565d847f94-jw8vf" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:14.982291  278286 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-jw8vf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:36:14.982340  278286 pod_ready.go:38] duration metric: took 4m0.007969001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:14.982370  278286 kubeadm.go:631] restartCluster took 4m10.8124082s
	W0108 21:36:14.982580  278286 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
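The pod_ready.go:102 lines above come from a poll that re-checks the CoreDNS pod's conditions until the 4m0s budget at pod_ready.go:81 runs out; a Pending pod whose only condition is PodScheduled=False can never reach Ready, so the loop logs the same status every couple of seconds. A minimal sketch of that kind of readiness check, assuming the k8s.io/api/core/v1 types (illustrative only, not minikube's actual helper):

```go
// Illustrative readiness check in the spirit of the pod_ready.go poll above.
// Assumes k8s.io/api/core/v1; the helper name is hypothetical.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod has a Ready condition with status True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A Pending pod whose only condition is PodScheduled=False, like coredns-565d847f94-jw8vf above.
	p := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodPending,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodScheduled, Status: corev1.ConditionFalse, Reason: "Unschedulable"},
			},
		},
	}
	fmt.Println(isPodReady(p)) // false: a poll like the one above keeps logging until its timeout
}
```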
	I0108 21:36:14.982625  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:36:17.712121  278286 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.729470949s)
	I0108 21:36:17.712185  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:17.722197  278286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:17.729255  278286 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:36:17.729298  278286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:36:17.736461  278286 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
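The exit status 2 here is expected right after `kubeadm reset`: the four kubeconfig files have been removed, so there is no stale configuration to clean up and the flow falls through to a fresh `kubeadm init`. A rough sketch of that check, assuming plain os/exec and hard-coded paths (the helper name is hypothetical, not minikube's actual code):

```go
// Rough sketch of the config check above: if `ls` on the kubeadm-generated configs fails
// (exit status 2 because the files are gone), skip stale-config cleanup and re-init.
// Helper name and paths are assumptions for this example.
package main

import (
	"fmt"
	"os/exec"
)

func staleConfigsPresent() bool {
	cmd := exec.Command("sudo", "ls", "-la",
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf")
	return cmd.Run() == nil // a non-nil error here corresponds to the "Process exited with status 2" above
}

func main() {
	if !staleConfigsPresent() {
		fmt.Println("config check failed, skipping stale config cleanup")
	}
}
```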
	I0108 21:36:17.736503  278286 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:36:17.776074  278286 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:36:17.776141  278286 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:36:17.803264  278286 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:36:17.803362  278286 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:36:17.803405  278286 kubeadm.go:317] OS: Linux
	I0108 21:36:17.803445  278286 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:36:17.803517  278286 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:36:17.803559  278286 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:36:17.803599  278286 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:36:17.803644  278286 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:36:17.803713  278286 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:36:17.803782  278286 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:36:17.803823  278286 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:36:17.803861  278286 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:36:17.868509  278286 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:36:17.868640  278286 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:36:17.868786  278286 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
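The KERNEL_VERSION/OS/CGROUPS_* lines in the preflight output are kubeadm's system verification reporting which cgroup controllers the host kernel has enabled (SystemVerification itself is being ignored here because of the docker driver). A small sketch of how such a report can be derived from /proc/cgroups; this is illustrative, not kubeadm's actual verifier:

```go
// Illustrative sketch of the CGROUPS_* report above: read /proc/cgroups and print
// each controller's enabled/disabled state. Not kubeadm's actual system verifier.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/cgroups")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "#") {
			continue // header: #subsys_name hierarchy num_cgroups enabled
		}
		fields := strings.Fields(line)
		if len(fields) != 4 {
			continue
		}
		state := "disabled"
		if fields[3] == "1" {
			state = "enabled"
		}
		fmt.Printf("CGROUPS_%s: %s\n", strings.ToUpper(fields[0]), state)
	}
}
```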
	I0108 21:36:17.980682  278286 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:36:14.864781  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:16.865103  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:17.985661  278286 out.go:204]   - Generating certificates and keys ...
	I0108 21:36:17.985801  278286 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:36:17.985902  278286 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:36:17.986004  278286 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:36:17.986091  278286 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:36:17.986183  278286 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:36:17.986259  278286 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:36:17.986341  278286 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:36:17.986417  278286 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:36:17.986542  278286 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:36:17.986649  278286 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:36:17.986701  278286 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:36:17.986780  278286 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:36:18.059736  278286 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:36:18.157820  278286 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:36:18.409007  278286 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:36:18.508551  278286 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:36:18.520890  278286 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:36:18.521889  278286 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:36:18.521949  278286 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:36:18.609158  278286 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:36:16.706837  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:19.206362  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:18.611390  278286 out.go:204]   - Booting up control plane ...
	I0108 21:36:18.611574  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:36:18.612908  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:36:18.613799  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:36:18.614568  278286 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:36:18.616788  278286 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:36:18.865230  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:20.865904  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:21.705735  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:23.706244  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:24.619697  278286 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002882 seconds
	I0108 21:36:24.619903  278286 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:36:24.627998  278286 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:36:25.143041  278286 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:36:25.143241  278286 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-211859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:36:25.650094  278286 kubeadm.go:317] [bootstrap-token] Using token: 0hs0sx.2quwwfjv2ljr7rle
	I0108 21:36:25.651809  278286 out.go:204]   - Configuring RBAC rules ...
	I0108 21:36:25.651961  278286 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:36:25.654307  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:36:25.658950  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:36:25.660952  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:36:25.662921  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:36:25.664784  278286 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:36:25.671893  278286 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:36:25.864621  278286 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:36:26.057684  278286 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:36:26.058669  278286 kubeadm.go:317] 
	I0108 21:36:26.058754  278286 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:36:26.058765  278286 kubeadm.go:317] 
	I0108 21:36:26.058853  278286 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:36:26.058869  278286 kubeadm.go:317] 
	I0108 21:36:26.058904  278286 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:36:26.058983  278286 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:36:26.059054  278286 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:36:26.059063  278286 kubeadm.go:317] 
	I0108 21:36:26.059140  278286 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 21:36:26.059150  278286 kubeadm.go:317] 
	I0108 21:36:26.059219  278286 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:36:26.059229  278286 kubeadm.go:317] 
	I0108 21:36:26.059298  278286 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:36:26.059393  278286 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:36:26.059498  278286 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:36:26.059510  278286 kubeadm.go:317] 
	I0108 21:36:26.059614  278286 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:36:26.059726  278286 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:36:26.059744  278286 kubeadm.go:317] 
	I0108 21:36:26.059848  278286 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 0hs0sx.2quwwfjv2ljr7rle \
	I0108 21:36:26.059981  278286 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:36:26.060005  278286 kubeadm.go:317] 	--control-plane 
	I0108 21:36:26.060009  278286 kubeadm.go:317] 
	I0108 21:36:26.060140  278286 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:36:26.060156  278286 kubeadm.go:317] 
	I0108 21:36:26.060242  278286 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 0hs0sx.2quwwfjv2ljr7rle \
	I0108 21:36:26.060344  278286 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:36:26.061999  278286 kubeadm.go:317] W0108 21:36:17.771186    3316 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:36:26.062209  278286 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:36:26.062331  278286 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:36:26.062355  278286 cni.go:95] Creating CNI manager for ""
	I0108 21:36:26.062365  278286 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:36:26.064570  278286 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:36:26.066293  278286 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:36:26.112674  278286 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:36:26.112695  278286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:36:26.128247  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:36:26.801006  278286 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:36:26.801092  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:26.801100  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=no-preload-211859 minikube.k8s.io/updated_at=2023_01_08T21_36_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:26.808849  278286 ops.go:34] apiserver oom_adj: -16
	I0108 21:36:26.928188  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:23.365451  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:25.365511  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:27.864750  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:25.706512  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:28.206205  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:27.522837  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:28.022542  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:28.522922  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.022368  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.522328  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:30.022929  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:30.523064  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:31.022221  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:31.522993  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:32.022733  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.865401  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:31.865613  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:30.207607  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:32.705941  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:34.706614  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:32.522593  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:33.022409  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:33.522830  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.022514  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.522961  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:35.023204  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:35.523260  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:36.022528  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:36.522928  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:37.022841  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.364509  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:36.364566  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:37.523049  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.022536  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.522834  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.586979  278286 kubeadm.go:1067] duration metric: took 11.78594385s to wait for elevateKubeSystemPrivileges.
	I0108 21:36:38.587009  278286 kubeadm.go:398] StartCluster complete in 4m34.458658123s
	I0108 21:36:38.587037  278286 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:38.587148  278286 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:36:38.588149  278286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:39.105452  278286 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-211859" rescaled to 1
	I0108 21:36:39.105521  278286 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:36:39.107702  278286 out.go:177] * Verifying Kubernetes components...
	I0108 21:36:39.105557  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:36:39.105612  278286 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:36:39.105739  278286 config.go:180] Loaded profile config "no-preload-211859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:36:39.109968  278286 addons.go:65] Setting storage-provisioner=true in profile "no-preload-211859"
	I0108 21:36:39.109979  278286 addons.go:65] Setting default-storageclass=true in profile "no-preload-211859"
	I0108 21:36:39.109999  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:39.110001  278286 addons.go:227] Setting addon storage-provisioner=true in "no-preload-211859"
	I0108 21:36:39.110004  278286 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-211859"
	W0108 21:36:39.110010  278286 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:36:39.110055  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.109970  278286 addons.go:65] Setting dashboard=true in profile "no-preload-211859"
	I0108 21:36:39.110159  278286 addons.go:227] Setting addon dashboard=true in "no-preload-211859"
	W0108 21:36:39.110169  278286 addons.go:236] addon dashboard should already be in state true
	I0108 21:36:39.110200  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.109981  278286 addons.go:65] Setting metrics-server=true in profile "no-preload-211859"
	I0108 21:36:39.110261  278286 addons.go:227] Setting addon metrics-server=true in "no-preload-211859"
	W0108 21:36:39.110276  278286 addons.go:236] addon metrics-server should already be in state true
	I0108 21:36:39.110330  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.110352  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110511  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110572  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110706  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.151624  278286 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:36:39.153337  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:36:39.153355  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:36:39.153407  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.155756  278286 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:36:39.157349  278286 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:36:39.157371  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:36:39.157418  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.160291  278286 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:36:39.157827  278286 addons.go:227] Setting addon default-storageclass=true in "no-preload-211859"
	W0108 21:36:39.162099  278286 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:36:39.162135  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.162607  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.164649  278286 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:36:37.206095  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:39.206996  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:39.166241  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:36:39.166260  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:36:39.166314  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.193544  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.199785  278286 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:36:39.199812  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:36:39.199862  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.205498  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.208611  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.231311  278286 node_ready.go:35] waiting up to 6m0s for node "no-preload-211859" to be "Ready" ...
	I0108 21:36:39.231694  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:36:39.240040  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.426253  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:36:39.426846  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:36:39.426865  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:36:39.436437  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:36:39.438425  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:36:39.438452  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:36:39.523837  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:36:39.523905  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:36:39.532411  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:36:39.532499  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:36:39.615631  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:36:39.615719  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:36:39.626445  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:36:39.626521  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:36:39.639382  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:36:39.639451  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:36:39.725135  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:36:39.731545  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:36:39.731573  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:36:39.827181  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:36:39.827289  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:36:39.917954  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:36:39.917981  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:36:40.011154  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:36:40.011186  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:36:40.017536  278286 start.go:826] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
	I0108 21:36:40.033803  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:36:40.033827  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:36:40.117534  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:36:40.522822  278286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.096529518s)
	I0108 21:36:40.522881  278286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.086407927s)
	I0108 21:36:40.714945  278286 addons.go:457] Verifying addon metrics-server=true in "no-preload-211859"
	I0108 21:36:41.016673  278286 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-211859 addons enable metrics-server	
	
	
	I0108 21:36:41.018352  278286 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0108 21:36:41.019949  278286 addons.go:488] enableAddons completed in 1.914342148s
	I0108 21:36:41.239026  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:41.203867  274657 pod_ready.go:81] duration metric: took 4m0.002306196s waiting for pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:41.203901  274657 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:36:41.203940  274657 pod_ready.go:38] duration metric: took 4m0.006906053s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:41.203967  274657 kubeadm.go:631] restartCluster took 5m9.671476322s
	W0108 21:36:41.204176  274657 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:36:41.204211  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:36:42.410951  274657 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.206714622s)
	I0108 21:36:42.411034  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:42.420761  274657 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:42.427895  274657 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:36:42.427942  274657 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:36:42.434476  274657 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:36:42.434514  274657 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:36:42.479014  274657 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0108 21:36:42.479075  274657 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:36:42.506527  274657 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:36:42.506650  274657 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:36:42.506722  274657 kubeadm.go:317] OS: Linux
	I0108 21:36:42.506775  274657 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:36:42.506836  274657 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:36:42.506895  274657 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:36:42.506970  274657 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:36:42.507042  274657 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:36:42.507115  274657 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:36:42.575244  274657 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:36:42.575356  274657 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:36:42.575464  274657 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:36:42.705716  274657 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:36:42.707322  274657 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:36:42.714364  274657 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0108 21:36:42.788896  274657 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:36:38.365195  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:40.864900  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:42.865124  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:42.793301  274657 out.go:204]   - Generating certificates and keys ...
	I0108 21:36:42.793445  274657 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:36:42.793584  274657 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:36:42.793709  274657 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:36:42.793804  274657 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:36:42.793866  274657 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:36:42.793909  274657 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:36:42.793956  274657 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:36:42.794003  274657 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:36:42.794059  274657 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:36:42.794113  274657 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:36:42.794145  274657 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:36:42.794211  274657 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:36:42.938030  274657 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:36:43.019391  274657 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:36:43.165446  274657 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:36:43.296073  274657 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:36:43.296890  274657 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:36:43.298841  274657 out.go:204]   - Booting up control plane ...
	I0108 21:36:43.298961  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:36:43.303628  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:36:43.304561  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:36:43.305309  274657 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:36:43.307378  274657 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:36:43.239329  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:45.239687  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:45.365383  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:47.865553  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:47.739338  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:49.739648  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:52.238824  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:51.810038  274657 kubeadm.go:317] [apiclient] All control plane components are healthy after 8.502593 seconds
	I0108 21:36:51.810181  274657 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:36:51.821149  274657 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:36:52.336468  274657 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:36:52.336653  274657 kubeadm.go:317] [mark-control-plane] Marking the node old-k8s-version-211828 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0108 21:36:52.842409  274657 kubeadm.go:317] [bootstrap-token] Using token: ayw1nu.phe95ebgibs3udtw
	I0108 21:36:52.844083  274657 out.go:204]   - Configuring RBAC rules ...
	I0108 21:36:52.844190  274657 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:36:52.847569  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:36:52.850422  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:36:52.852561  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:36:52.854272  274657 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:36:52.894172  274657 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:36:53.257840  274657 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:36:53.258782  274657 kubeadm.go:317] 
	I0108 21:36:53.258856  274657 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:36:53.258871  274657 kubeadm.go:317] 
	I0108 21:36:53.258948  274657 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:36:53.258958  274657 kubeadm.go:317] 
	I0108 21:36:53.258988  274657 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:36:53.259068  274657 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:36:53.259119  274657 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:36:53.259126  274657 kubeadm.go:317] 
	I0108 21:36:53.259165  274657 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:36:53.259250  274657 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:36:53.259306  274657 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:36:53.259310  274657 kubeadm.go:317] 
	I0108 21:36:53.259383  274657 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities 
	I0108 21:36:53.259441  274657 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:36:53.259446  274657 kubeadm.go:317] 
	I0108 21:36:53.259539  274657 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token ayw1nu.phe95ebgibs3udtw \
	I0108 21:36:53.259662  274657 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:36:53.259688  274657 kubeadm.go:317]     --control-plane 	  
	I0108 21:36:53.259694  274657 kubeadm.go:317] 
	I0108 21:36:53.259813  274657 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:36:53.259829  274657 kubeadm.go:317] 
	I0108 21:36:53.259906  274657 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token ayw1nu.phe95ebgibs3udtw \
	I0108 21:36:53.260017  274657 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:36:53.262215  274657 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:36:53.262352  274657 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:36:53.262389  274657 cni.go:95] Creating CNI manager for ""
	I0108 21:36:53.262399  274657 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:36:53.264329  274657 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:36:50.364823  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:52.865232  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:53.265737  274657 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:36:53.269178  274657 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0108 21:36:53.269195  274657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:36:53.282457  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
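Here minikube copies a 2429-byte kindnet manifest to /var/tmp/minikube/cni.yaml on the node and applies it with the bundled kubectl. A hedged way to confirm the CNI pods came up afterwards; the daemonset name and the app=kindnet label are assumptions based on the stock kindnet manifest, not values taken from this log:

	# Illustrative sketch; object names and labels are assumed, not read from this log.
	kubectl -n kube-system get daemonset kindnet
	kubectl -n kube-system get pods -l app=kindnet -o wide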
	I0108 21:36:53.488747  274657 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:36:53.488820  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:53.488836  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=old-k8s-version-211828 minikube.k8s.io/updated_at=2023_01_08T21_36_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:53.570539  274657 ops.go:34] apiserver oom_adj: -16
	I0108 21:36:53.570672  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:54.167787  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:54.667921  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:54.239313  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:56.739563  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:55.364998  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:57.365375  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
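The coredns pod above stays Pending because the node still carries the node.kubernetes.io/not-ready taint; the taint clears once the CNI reports the node Ready, at which point the scheduler can place the pod. A short, illustrative way to see both sides of that (node-name is a placeholder; the k8s-app=kube-dns label is the standard coredns selector and an assumption here):

	# Illustrative only.
	kubectl describe node <node-name> | grep -A2 Taints           # shows node.kubernetes.io/not-ready while the CNI is not up
	kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide   # coredns remains Pending until the taint clears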
	I0108 21:36:55.167437  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:55.667880  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:56.167390  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:56.667596  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:57.167755  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:57.667185  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:58.167862  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:58.667300  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:59.167329  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:59.667869  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:59.239207  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:01.738681  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:59.865037  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:02.364695  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:00.167819  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:00.668207  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:01.167287  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:01.668111  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:02.167785  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:02.667989  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:03.167539  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:03.667603  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:04.167676  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:04.667808  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:03.739097  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:05.739401  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:04.864908  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:07.365162  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:05.168182  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:05.667597  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:06.167537  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:06.667619  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:07.168108  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:07.668145  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:08.167448  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:08.262221  274657 kubeadm.go:1067] duration metric: took 14.773463011s to wait for elevateKubeSystemPrivileges.
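elevateKubeSystemPrivileges is the step logged above: minikube binds cluster-admin to the kube-system:default service account and then polls `kubectl get sa default` until the default ServiceAccount exists. A condensed bash equivalent of that loop, using the same bundled kubectl paths shown in the log:

	# Rough equivalent of the loop above; run on the node, paths copied from the log lines.
	sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	until sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # the log shows retries roughly every 500ms
	done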
	I0108 21:37:08.262258  274657 kubeadm.go:398] StartCluster complete in 5m36.772809994s
	I0108 21:37:08.262281  274657 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:08.262401  274657 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:37:08.263456  274657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:08.779968  274657 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-211828" rescaled to 1
	I0108 21:37:08.780035  274657 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:37:08.781734  274657 out.go:177] * Verifying Kubernetes components...
	I0108 21:37:08.780090  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:37:08.780101  274657 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:37:08.780321  274657 config.go:180] Loaded profile config "old-k8s-version-211828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0108 21:37:08.783353  274657 addons.go:65] Setting dashboard=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783365  274657 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783367  274657 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783380  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:08.783385  274657 addons.go:227] Setting addon metrics-server=true in "old-k8s-version-211828"
	I0108 21:37:08.783387  274657 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-211828"
	W0108 21:37:08.783394  274657 addons.go:236] addon metrics-server should already be in state true
	I0108 21:37:08.783441  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783384  274657 addons.go:227] Setting addon dashboard=true in "old-k8s-version-211828"
	W0108 21:37:08.783526  274657 addons.go:236] addon dashboard should already be in state true
	I0108 21:37:08.783568  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783356  274657 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783648  274657 addons.go:227] Setting addon storage-provisioner=true in "old-k8s-version-211828"
	W0108 21:37:08.783668  274657 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:37:08.783727  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783776  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.783927  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.784028  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.784133  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.794999  274657 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-211828" to be "Ready" ...
	I0108 21:37:08.824991  274657 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:37:08.822967  274657 addons.go:227] Setting addon default-storageclass=true in "old-k8s-version-211828"
	W0108 21:37:08.825030  274657 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:37:08.825068  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.826962  274657 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:37:08.825542  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.828596  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:37:08.828602  274657 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:37:08.828610  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:37:08.828632  274657 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:08.830193  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:37:08.831697  274657 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:37:08.830251  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.828662  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.833415  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:37:08.833435  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:37:08.833477  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.865130  274657 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:08.865153  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:37:08.865262  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.870167  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.876829  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.891352  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.895346  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:37:08.901551  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.966952  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:37:08.966980  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:37:09.020839  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:37:09.020864  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:37:09.026679  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:37:09.026702  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:37:09.035881  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:09.036053  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:09.037460  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:37:09.037484  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:37:09.113665  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:09.113699  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:37:09.126531  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:37:09.126566  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:37:09.132355  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:09.142671  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:37:09.142695  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:37:09.225954  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:37:09.225983  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:37:09.311794  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:37:09.311868  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:37:09.321460  274657 start.go:826] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
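The "host record injected" line is the result of the sed pipeline at 21:37:08.895346 above: the coredns ConfigMap is rewritten so the Corefile gains a hosts block mapping host.minikube.internal to the docker network gateway (192.168.76.1 for this profile), then replaced in place. The fragment below is reconstructed from that command, not dumped from the cluster:

	# Corefile fragment added by the sed above (reconstructed, illustrative):
	#     hosts {
	#        192.168.76.1 host.minikube.internal
	#        fallthrough
	#     }
	kubectl -n kube-system get configmap coredns -o yaml   # one way to inspect the result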
	I0108 21:37:09.329750  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:37:09.329779  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:37:09.415014  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:37:09.415041  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:37:09.434577  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:09.434608  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:37:09.450703  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:09.848961  274657 addons.go:457] Verifying addon metrics-server=true in "old-k8s-version-211828"
	I0108 21:37:10.258944  274657 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-211828 addons enable metrics-server	
	
	
	I0108 21:37:10.260902  274657 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
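At this point all four addons for the old-k8s-version profile have been enabled by applying the scp'd manifests directly with the bundled kubectl. On an ordinary profile the same outcome can be reached with the minikube CLI; note that in this test the metrics-server image is deliberately pointed at fake.domain (see the "Using image fake.domain/k8s.gcr.io/echoserver:1.4" line above), so it will never actually serve metrics here. Illustrative commands, not executed by the test:

	# Illustrative only.
	minikube -p old-k8s-version-211828 addons enable metrics-server
	minikube -p old-k8s-version-211828 addons enable dashboard
	kubectl -n kubernetes-dashboard get pods   # the dashboard components land in this namespace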
	I0108 21:37:07.739683  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:09.740319  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:12.239302  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:09.365405  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:11.865521  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:10.262484  274657 addons.go:488] enableAddons completed in 1.482385235s
	I0108 21:37:10.800978  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:13.301617  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:14.239339  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:16.239538  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:14.364973  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:15.862343  282279 pod_ready.go:81] duration metric: took 4m0.002735215s waiting for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" ...
	E0108 21:37:15.862365  282279 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:37:15.862410  282279 pod_ready.go:38] duration metric: took 4m0.008337756s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:37:15.862442  282279 kubeadm.go:631] restartCluster took 4m10.846498869s
	W0108 21:37:15.862572  282279 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:37:15.862600  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:37:18.604264  282279 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.741643542s)
	I0108 21:37:18.604323  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:18.613785  282279 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:37:18.620707  282279 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:37:18.620756  282279 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:37:18.627110  282279 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
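The stale-config check fails as expected: the kubeadm reset a few lines earlier already removed the /etc/kubernetes/*.conf files, so minikube skips cleanup and goes straight to a fresh kubeadm init. A condensed sketch of the same reset-and-reinit sequence, with paths and flags copied from the surrounding log lines (the --ignore-preflight-errors list is abbreviated; the full flag appears in the Start: line that follows):

	# Condensed from the commands in this log.
	sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" \
	  kubeadm reset --cri-socket /run/containerd/containerd.sock --force
	sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=SystemVerification,Swap,Mem   # abbreviated; full list in the Start: line below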
	I0108 21:37:18.627161  282279 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:37:18.665230  282279 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:37:18.665379  282279 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:37:18.693390  282279 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:37:18.693485  282279 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:37:18.693536  282279 kubeadm.go:317] OS: Linux
	I0108 21:37:18.693625  282279 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:37:18.693699  282279 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:37:18.693758  282279 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:37:18.693816  282279 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:37:18.693855  282279 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:37:18.693897  282279 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:37:18.693932  282279 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:37:18.693986  282279 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:37:18.694033  282279 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:37:18.757764  282279 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:37:18.757887  282279 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:37:18.757990  282279 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:37:18.880203  282279 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:37:18.885649  282279 out.go:204]   - Generating certificates and keys ...
	I0108 21:37:18.885786  282279 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:37:18.885859  282279 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:37:18.885942  282279 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:37:18.886014  282279 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:37:18.886108  282279 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:37:18.886194  282279 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:37:18.886282  282279 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:37:18.886366  282279 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:37:18.886464  282279 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:37:18.886537  282279 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:37:18.886603  282279 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:37:18.886705  282279 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:37:18.970116  282279 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:37:19.061650  282279 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:37:19.314844  282279 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:37:19.411377  282279 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:37:19.423013  282279 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:37:19.423842  282279 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:37:19.423907  282279 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:37:19.507274  282279 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:37:15.801234  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:18.301292  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:18.738947  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:20.739953  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:19.509473  282279 out.go:204]   - Booting up control plane ...
	I0108 21:37:19.509609  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:37:19.510392  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:37:19.511285  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:37:19.512005  282279 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:37:19.514544  282279 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
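kubeadm has now written the four static pod manifests and waits (up to 4m0s) for the kubelet to start them. An illustrative way to watch that from inside the node, e.g. via `minikube -p default-k8s-diff-port-211952 ssh`; not part of the test run:

	# Illustrative only; run inside the node.
	ls /etc/kubernetes/manifests            # etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml
	sudo crictl ps --name kube-apiserver    # the container shows up once the kubelet starts the static pod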
	I0108 21:37:20.301380  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:22.801865  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:25.517443  282279 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002884 seconds
	I0108 21:37:25.517596  282279 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:37:25.525842  282279 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:37:26.040802  282279 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:37:26.041035  282279 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-diff-port-211952 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:37:26.548645  282279 kubeadm.go:317] [bootstrap-token] Using token: e8jg3u.r5d9gog7fpwiofqp
	I0108 21:37:26.550383  282279 out.go:204]   - Configuring RBAC rules ...
	I0108 21:37:26.550517  282279 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:37:26.553632  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:37:26.561595  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:37:26.563603  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:37:26.566273  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:37:26.569011  282279 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:37:26.577117  282279 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:37:26.777486  282279 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:37:26.956684  282279 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:37:26.957742  282279 kubeadm.go:317] 
	I0108 21:37:26.957841  282279 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:37:26.957852  282279 kubeadm.go:317] 
	I0108 21:37:26.957946  282279 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:37:26.957959  282279 kubeadm.go:317] 
	I0108 21:37:26.957992  282279 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:37:26.958072  282279 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:37:26.958151  282279 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:37:26.958161  282279 kubeadm.go:317] 
	I0108 21:37:26.958244  282279 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 21:37:26.958255  282279 kubeadm.go:317] 
	I0108 21:37:26.958324  282279 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:37:26.958334  282279 kubeadm.go:317] 
	I0108 21:37:26.958411  282279 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:37:26.958519  282279 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:37:26.958614  282279 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:37:26.958627  282279 kubeadm.go:317] 
	I0108 21:37:26.958736  282279 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:37:26.958873  282279 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:37:26.958895  282279 kubeadm.go:317] 
	I0108 21:37:26.958993  282279 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token e8jg3u.r5d9gog7fpwiofqp \
	I0108 21:37:26.959108  282279 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:37:26.959144  282279 kubeadm.go:317] 	--control-plane 
	I0108 21:37:26.959155  282279 kubeadm.go:317] 
	I0108 21:37:26.959279  282279 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:37:26.959295  282279 kubeadm.go:317] 
	I0108 21:37:26.959387  282279 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token e8jg3u.r5d9gog7fpwiofqp \
	I0108 21:37:26.959591  282279 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:37:27.010668  282279 kubeadm.go:317] W0108 21:37:18.659761    3310 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:37:27.010963  282279 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:37:27.011109  282279 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:37:27.011143  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:37:27.011161  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:37:27.013790  282279 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:37:23.239090  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:25.239428  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:27.016436  282279 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:37:27.020247  282279 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:37:27.020267  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:37:27.033939  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:37:27.773746  282279 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:37:27.773820  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:27.773829  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=default-k8s-diff-port-211952 minikube.k8s.io/updated_at=2023_01_08T21_37_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
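The label command above stamps the node with minikube.k8s.io/version, commit, profile name, timestamp and primary=true. An illustrative check of the result, with the node name taken from that command (not executed by the test):

	# Illustrative only.
	kubectl get node default-k8s-diff-port-211952 --show-labels | tr ',' '\n' | grep minikube.k8s.io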
	I0108 21:37:27.858069  282279 ops.go:34] apiserver oom_adj: -16
	I0108 21:37:27.858162  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:25.301674  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:27.801420  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:27.738878  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:29.739083  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:31.739252  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:28.451616  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:28.951553  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:29.451725  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:29.950766  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.450878  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.951743  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:31.450739  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:31.951303  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:32.450882  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:32.951389  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.301599  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:32.800759  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:33.739342  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:36.238973  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:33.451553  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:33.951640  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:34.451179  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:34.951522  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.450753  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.950904  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:36.450992  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:36.951610  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:37.451311  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:37.951081  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.301523  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:37.800886  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:38.451124  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:38.951311  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:39.451052  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:39.951786  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:40.450906  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:40.622559  282279 kubeadm.go:1067] duration metric: took 12.848793735s to wait for elevateKubeSystemPrivileges.
	I0108 21:37:40.622595  282279 kubeadm.go:398] StartCluster complete in 4m35.649555324s
	I0108 21:37:40.622614  282279 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:40.622704  282279 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:37:40.623799  282279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:41.138673  282279 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-diff-port-211952" rescaled to 1
	I0108 21:37:41.138736  282279 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:37:41.138753  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:37:41.141673  282279 out.go:177] * Verifying Kubernetes components...
	I0108 21:37:41.138793  282279 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:37:41.138974  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:37:41.143598  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:41.143622  282279 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143643  282279 addons.go:227] Setting addon storage-provisioner=true in "default-k8s-diff-port-211952"
	W0108 21:37:41.143652  282279 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:37:41.143672  282279 addons.go:65] Setting default-storageclass=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143694  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.143696  282279 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-211952"
	I0108 21:37:41.143742  282279 addons.go:65] Setting metrics-server=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143751  282279 addons.go:65] Setting dashboard=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143771  282279 addons.go:227] Setting addon metrics-server=true in "default-k8s-diff-port-211952"
	I0108 21:37:41.143780  282279 addons.go:227] Setting addon dashboard=true in "default-k8s-diff-port-211952"
	W0108 21:37:41.143797  282279 addons.go:236] addon dashboard should already be in state true
	I0108 21:37:41.143841  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	W0108 21:37:41.143781  282279 addons.go:236] addon metrics-server should already be in state true
	I0108 21:37:41.143915  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.144018  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144222  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144229  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144299  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.184041  282279 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:37:41.186236  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:37:41.186259  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:37:41.183770  282279 addons.go:227] Setting addon default-storageclass=true in "default-k8s-diff-port-211952"
	I0108 21:37:41.186311  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	W0108 21:37:41.186320  282279 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:37:41.186356  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.187948  282279 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:37:41.186812  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.191003  282279 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:37:41.189639  282279 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:41.192705  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:37:41.192773  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.195052  282279 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:37:38.239104  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:40.239437  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:41.196683  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:37:41.196706  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:37:41.196763  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.221516  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.226288  282279 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:41.226312  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:37:41.226392  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.226595  282279 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-211952" to be "Ready" ...
	I0108 21:37:41.226958  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:37:41.233899  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.236188  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.261350  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.328029  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:37:41.328055  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:37:41.410390  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:37:41.410477  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:37:41.429903  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:41.429978  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:37:41.431528  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:41.434596  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:41.435835  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:37:41.435891  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:37:41.518039  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:41.525611  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:37:41.525635  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:37:41.617739  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:37:41.617770  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:37:41.710400  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:37:41.710430  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:37:41.733619  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:37:41.733650  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:37:41.913693  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:37:41.913722  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:37:41.923702  282279 start.go:826] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0108 21:37:41.939574  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:37:41.939602  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:37:42.033056  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:37:42.033090  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:37:42.126252  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:42.126280  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:37:42.219356  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:42.612393  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.177754873s)
	I0108 21:37:42.649146  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.131058374s)
	I0108 21:37:42.649245  282279 addons.go:457] Verifying addon metrics-server=true in "default-k8s-diff-port-211952"
	I0108 21:37:43.233589  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:43.519132  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.299673532s)
	I0108 21:37:43.521195  282279 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-211952 addons enable metrics-server	
	
	
	I0108 21:37:43.523337  282279 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0108 21:37:39.801595  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:41.801850  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:44.301445  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:42.739717  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:45.239105  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:43.525339  282279 addons.go:488] enableAddons completed in 2.386543882s
	I0108 21:37:45.732797  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:47.733580  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:46.800798  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:48.800989  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:47.738847  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:49.739115  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:52.238899  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:50.232935  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:52.233798  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:50.801073  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:52.801144  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:54.239128  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:56.739014  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:54.733016  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:56.733874  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:55.301797  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:57.801274  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:59.239171  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:01.239292  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:59.233003  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:01.233346  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:03.233665  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:59.801607  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:02.300746  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:04.301290  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:03.738362  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:05.738653  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:05.233897  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:07.234180  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:06.801829  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:09.301092  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:07.739372  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:10.239775  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:09.733403  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:11.733914  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:11.301300  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:13.800777  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:12.739231  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:15.238970  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:14.233667  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:16.732749  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:15.801406  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:17.801519  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:17.738673  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:19.738980  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:22.238583  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:18.733049  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:20.734111  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:23.233585  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:19.801620  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:22.301152  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:24.239366  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:26.738352  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:25.233967  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:27.732889  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:24.801117  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:27.300926  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:29.301266  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:28.739245  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:31.238599  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:29.733825  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:32.234140  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:31.301555  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:33.800917  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:33.239230  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:35.738754  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:34.733077  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:36.733560  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:35.801221  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:37.801365  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:38.239549  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:40.738973  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:38.733737  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:41.232994  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:43.233767  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:40.300687  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:42.301352  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:44.301680  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:42.739381  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:45.238776  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:47.238948  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:45.233859  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:47.733544  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:46.801357  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:48.801472  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:49.739156  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:52.239344  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:49.733766  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:52.233361  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:51.300633  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:53.301297  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:54.239534  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:56.738615  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:54.233916  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:56.733328  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:55.801671  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:58.301397  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:58.738759  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:00.739100  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:58.734209  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:01.232932  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:03.233020  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:00.801536  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:03.300754  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:03.239262  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:05.739203  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:05.233361  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:07.233770  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:05.301375  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:07.800934  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:08.239116  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:10.239161  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:09.733072  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:11.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:09.801368  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:12.301198  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:12.738523  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:14.739235  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:17.239112  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:14.233759  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:16.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:14.801261  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:17.300721  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:19.301075  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:19.738653  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:21.738764  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:18.733878  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:21.233705  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:21.301289  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:23.301516  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:23.738915  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:26.239205  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:23.733860  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:26.233091  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:28.233460  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:25.801475  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:28.301549  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:28.239272  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:30.738619  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:30.733105  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:32.734009  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:30.800660  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:33.301504  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:32.739223  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:35.238771  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:37.238972  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:35.233611  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:37.733328  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:35.801029  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:37.801500  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:39.239140  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:41.739302  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:39.733731  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:42.233801  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:40.301529  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:42.800621  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:44.238840  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:46.239243  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:44.733038  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:46.733391  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:44.801100  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:47.300450  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:49.301320  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:48.739022  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:51.238630  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:49.233954  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:51.733795  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:51.801285  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:53.801488  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:53.739288  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:56.239051  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:54.234004  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:56.733167  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:56.301044  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:58.800845  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:58.738520  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:00.739017  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:59.233766  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:01.733686  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:01.301450  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:03.301533  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:02.739209  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:04.739248  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:06.739344  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:04.233335  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:06.233688  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:08.233796  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:05.800709  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:07.801022  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:09.239054  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:11.739385  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:10.233869  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:12.733211  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:10.300739  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:12.301541  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:14.239654  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:16.739048  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:15.233047  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:17.733710  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:14.801253  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:16.801334  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:18.801736  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:19.238509  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:21.238761  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:20.232874  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:22.232916  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:21.301555  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:23.800846  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:23.239162  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:25.239455  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:27.240625  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:24.233476  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:26.733575  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:25.801246  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:28.301212  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:29.739116  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:31.739148  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:28.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:31.233731  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:33.233890  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:30.301480  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:32.800970  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:34.238950  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:36.239143  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:35.733135  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:37.733332  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:38.738709  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:39.241032  278286 node_ready.go:38] duration metric: took 4m0.009684254s waiting for node "no-preload-211859" to be "Ready" ...
	I0108 21:40:39.243691  278286 out.go:177] 
	W0108 21:40:39.245553  278286 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:40:39.245570  278286 out.go:239] * 
	W0108 21:40:39.246458  278286 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:40:39.249123  278286 out.go:177] 
	I0108 21:40:35.300833  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:37.801290  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:40.233285  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:42.234025  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:40.300917  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:42.301122  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:44.301723  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:44.733707  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:47.232740  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:46.801299  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:48.801395  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:49.233976  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:51.733761  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:51.301336  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:53.301705  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:54.233585  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:56.233841  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:55.801251  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:58.301027  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:58.733149  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:01.233702  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:03.233901  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:00.301463  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:02.801220  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:05.733569  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:08.233143  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:04.801563  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:07.301530  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:08.802728  274657 node_ready.go:38] duration metric: took 4m0.007692604s waiting for node "old-k8s-version-211828" to be "Ready" ...
	I0108 21:41:08.805120  274657 out.go:177] 
	W0108 21:41:08.806709  274657 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:41:08.806733  274657 out.go:239] * 
	W0108 21:41:08.807656  274657 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:41:08.809434  274657 out.go:177] 
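	A profile-scoped variant of the log-collection advice in the box above, for reproducing this locally (a sketch only; the -p profile name is taken from this test run and should be adjusted to your environment):

	    minikube logs --file=logs.txt -p old-k8s-version-211828
	    minikube status -p old-k8s-version-211828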
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	862e5d558b0c1       d6e3e26021b60       About a minute ago   Running             kindnet-cni               1                   229cf0ebba830
	fabdc3aa883d8       d6e3e26021b60       4 minutes ago        Exited              kindnet-cni               0                   229cf0ebba830
	0bb48abbf3066       c21b0c7400f98       4 minutes ago        Running             kube-proxy                0                   cfc2e9ff7b2fb
	9b0e57fd243d3       b2756210eeabf       4 minutes ago        Running             etcd                      0                   134c442360b3c
	7458febb17f62       06a629a7e51cd       4 minutes ago        Running             kube-controller-manager   0                   d0163a00edc6f
	216117bba57a4       b305571ca60a5       4 minutes ago        Running             kube-apiserver            0                   110f7899c876b
	34023d0c3e2fc       301ddc62b80b1       4 minutes ago        Running             kube-scheduler            0                   1c7d262754d7c
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sun 2023-01-08 21:31:15 UTC, end at Sun 2023-01-08 21:41:09 UTC. --
	Jan 08 21:37:07 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:37:07.983124998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:37:07 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:37:07.983140979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:37:07 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:37:07.983432660Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/229cf0ebba830dd82a892eda3cb6a07896d0dea141cf1cb04d2832750302c340 pid=3853 runtime=io.containerd.runc.v2
	Jan 08 21:37:07 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:37:07.987167645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:37:07 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:37:07.987236020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:37:07 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:37:07.987251848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:37:07 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:37:07.987462384Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cfc2e9ff7b2fba0165b6bebd07bd47d0b0dce4ebc2f96a950ecf2ff1ce8d8279 pid=3870 runtime=io.containerd.runc.v2
	Jan 08 21:37:08 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:37:08.043594101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wp9ct,Uid:a692c264-6643-470c-91f5-426116336928,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfc2e9ff7b2fba0165b6bebd07bd47d0b0dce4ebc2f96a950ecf2ff1ce8d8279\""
	Jan 08 21:37:08 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:37:08.046253552Z" level=info msg="CreateContainer within sandbox \"cfc2e9ff7b2fba0165b6bebd07bd47d0b0dce4ebc2f96a950ecf2ff1ce8d8279\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Jan 08 21:37:08 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:37:08.060867362Z" level=info msg="CreateContainer within sandbox \"cfc2e9ff7b2fba0165b6bebd07bd47d0b0dce4ebc2f96a950ecf2ff1ce8d8279\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0bb48abbf3066147aa288f5bdce84119ea74e56fe7ae2cf25ac4776d3cd01e62\""
	Jan 08 21:37:08 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:37:08.061420360Z" level=info msg="StartContainer for \"0bb48abbf3066147aa288f5bdce84119ea74e56fe7ae2cf25ac4776d3cd01e62\""
	Jan 08 21:37:08 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:37:08.126655337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-vvlch,Uid:726e82bd-431c-44e0-9ba6-300e9f0997d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"229cf0ebba830dd82a892eda3cb6a07896d0dea141cf1cb04d2832750302c340\""
	Jan 08 21:37:08 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:37:08.129432343Z" level=info msg="StartContainer for \"0bb48abbf3066147aa288f5bdce84119ea74e56fe7ae2cf25ac4776d3cd01e62\" returns successfully"
	Jan 08 21:37:08 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:37:08.130034109Z" level=info msg="CreateContainer within sandbox \"229cf0ebba830dd82a892eda3cb6a07896d0dea141cf1cb04d2832750302c340\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Jan 08 21:37:08 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:37:08.142689484Z" level=info msg="CreateContainer within sandbox \"229cf0ebba830dd82a892eda3cb6a07896d0dea141cf1cb04d2832750302c340\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"fabdc3aa883d84ea4981078e3a4b83d031b470cbf3a91dc5972dfa813c7277b1\""
	Jan 08 21:37:08 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:37:08.143198991Z" level=info msg="StartContainer for \"fabdc3aa883d84ea4981078e3a4b83d031b470cbf3a91dc5972dfa813c7277b1\""
	Jan 08 21:37:08 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:37:08.329138112Z" level=info msg="StartContainer for \"fabdc3aa883d84ea4981078e3a4b83d031b470cbf3a91dc5972dfa813c7277b1\" returns successfully"
	Jan 08 21:39:48 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:39:48.864137072Z" level=info msg="shim disconnected" id=fabdc3aa883d84ea4981078e3a4b83d031b470cbf3a91dc5972dfa813c7277b1
	Jan 08 21:39:48 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:39:48.864202803Z" level=warning msg="cleaning up after shim disconnected" id=fabdc3aa883d84ea4981078e3a4b83d031b470cbf3a91dc5972dfa813c7277b1 namespace=k8s.io
	Jan 08 21:39:48 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:39:48.864212977Z" level=info msg="cleaning up dead shim"
	Jan 08 21:39:48 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:39:48.872728930Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:39:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4708 runtime=io.containerd.runc.v2\n"
	Jan 08 21:39:49 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:39:49.044729063Z" level=info msg="CreateContainer within sandbox \"229cf0ebba830dd82a892eda3cb6a07896d0dea141cf1cb04d2832750302c340\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Jan 08 21:39:49 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:39:49.057998237Z" level=info msg="CreateContainer within sandbox \"229cf0ebba830dd82a892eda3cb6a07896d0dea141cf1cb04d2832750302c340\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"862e5d558b0c11f05f02ef9fc1ef81f0678dc4af6cbd49747a0104f86b717fe4\""
	Jan 08 21:39:49 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:39:49.060828676Z" level=info msg="StartContainer for \"862e5d558b0c11f05f02ef9fc1ef81f0678dc4af6cbd49747a0104f86b717fe4\""
	Jan 08 21:39:49 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:39:49.125860982Z" level=info msg="StartContainer for \"862e5d558b0c11f05f02ef9fc1ef81f0678dc4af6cbd49747a0104f86b717fe4\" returns successfully"
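	The shim-disconnected/CreateContainer sequence above is the kindnet-cni restart that also appears in the container status table (attempt 0 Exited, attempt 1 Running). One way to confirm such a restart from the node itself would be something like the following (a sketch, assuming the profile is still running and reachable over minikube ssh):

	    minikube ssh -p old-k8s-version-211828 -- sudo crictl ps -a --name kindnet-cni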
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-211828
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-211828
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=old-k8s-version-211828
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T21_36_53_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 21:36:48 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 21:40:48 +0000   Sun, 08 Jan 2023 21:36:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 21:40:48 +0000   Sun, 08 Jan 2023 21:36:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 21:40:48 +0000   Sun, 08 Jan 2023 21:36:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 08 Jan 2023 21:40:48 +0000   Sun, 08 Jan 2023 21:36:45 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-211828
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304681132Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32871748Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304681132Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32871748Ki
	 pods:               110
	System Info:
	 Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	 System UUID:                a9413ae7-d165-4b76-a22b-73b89e3e2d6a
	 Boot ID:                    abb1671c-ddf5-4694-bdc8-1024e5cc0b18
	 Kernel Version:             5.15.0-1025-gcp
	 OS Image:                   Ubuntu 20.04.5 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.6.10
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-211828                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                kindnet-vvlch                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m3s
	  kube-system                kube-apiserver-old-k8s-version-211828             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m19s
	  kube-system                kube-controller-manager-old-k8s-version-211828    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m5s
	  kube-system                kube-proxy-wp9ct                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                kube-scheduler-old-k8s-version-211828             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From                                Message
	  ----    ------                   ----                   ----                                -------
	  Normal  Starting                 4m27s                  kubelet, old-k8s-version-211828     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m27s (x9 over 4m27s)  kubelet, old-k8s-version-211828     Node old-k8s-version-211828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s (x7 over 4m27s)  kubelet, old-k8s-version-211828     Node old-k8s-version-211828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s (x7 over 4m27s)  kubelet, old-k8s-version-211828     Node old-k8s-version-211828 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m27s                  kubelet, old-k8s-version-211828     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m2s                   kube-proxy, old-k8s-version-211828  Starting kube-proxy.
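	The Ready=False condition above (runtime network not ready: cni plugin not initialized) is what the node_ready.go polling earlier in this log keeps reporting. To pull just that condition directly, something along these lines would work (a sketch; it assumes the kubeconfig context carries the profile name, which is minikube's default):

	    kubectl --context old-k8s-version-211828 get node old-k8s-version-211828 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].reason}: {.status.conditions[?(@.type=="Ready")].message}'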
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +2.971851] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027844] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027909] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[Jan 8 21:19] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.006215] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023951] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.967852] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.035798] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023925] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.940341] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.027361] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.019905] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	
	* 
	* ==> etcd [9b0e57fd243d3308c0449b3e7a14258d0eaf8edbdd267eb52c589c56f4035882] <==
	* 2023-01-08 21:36:44.419012 I | etcdserver: initial cluster = old-k8s-version-211828=https://192.168.76.2:2380
	2023-01-08 21:36:44.428849 I | etcdserver: starting member ea7e25599daad906 in cluster 6f20f2c4b2fb5f8a
	2023-01-08 21:36:44.428876 I | raft: ea7e25599daad906 became follower at term 0
	2023-01-08 21:36:44.428883 I | raft: newRaft ea7e25599daad906 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-01-08 21:36:44.428886 I | raft: ea7e25599daad906 became follower at term 1
	2023-01-08 21:36:44.432856 W | auth: simple token is not cryptographically signed
	2023-01-08 21:36:44.435013 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-01-08 21:36:44.435460 I | etcdserver: ea7e25599daad906 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-01-08 21:36:44.435729 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2023-01-08 21:36:44.436906 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-01-08 21:36:44.437018 I | embed: listening for metrics on http://192.168.76.2:2381
	2023-01-08 21:36:44.437064 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-01-08 21:36:44.729178 I | raft: ea7e25599daad906 is starting a new election at term 1
	2023-01-08 21:36:44.729213 I | raft: ea7e25599daad906 became candidate at term 2
	2023-01-08 21:36:44.729231 I | raft: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	2023-01-08 21:36:44.729242 I | raft: ea7e25599daad906 became leader at term 2
	2023-01-08 21:36:44.729249 I | raft: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2023-01-08 21:36:44.729515 I | etcdserver: setting up the initial cluster version to 3.3
	2023-01-08 21:36:44.730333 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-01-08 21:36:44.730373 I | etcdserver/api: enabled capabilities for version 3.3
	2023-01-08 21:36:44.730403 I | etcdserver: published {Name:old-k8s-version-211828 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2023-01-08 21:36:44.730413 I | embed: ready to serve client requests
	2023-01-08 21:36:44.730491 I | embed: ready to serve client requests
	2023-01-08 21:36:44.732637 I | embed: serving client requests on 127.0.0.1:2379
	2023-01-08 21:36:44.732810 I | embed: serving client requests on 192.168.76.2:2379
	
	* 
	* ==> kernel <==
	*  21:41:10 up  1:23,  0 users,  load average: 0.18, 0.31, 0.77
	Linux old-k8s-version-211828 5.15.0-1025-gcp #32~20.04.2-Ubuntu SMP Tue Nov 29 08:31:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [216117bba57a438567af62bcbd3048094bf895ba1b4696bb7f6074dbbd62f7bb] <==
	* I0108 21:36:50.857875       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:36:51.137706       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0108 21:36:51.440887       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0108 21:36:51.441554       1 controller.go:606] quota admission added evaluator for: endpoints
	I0108 21:36:52.332543       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0108 21:36:52.884694       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0108 21:36:53.249847       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0108 21:37:07.619579       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0108 21:37:07.633038       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0108 21:37:07.715956       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0108 21:37:11.586535       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 21:37:11.586630       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 21:37:11.586717       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:37:11.586732       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 21:38:11.586964       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 21:38:11.587068       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 21:38:11.587120       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:38:11.587134       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 21:40:11.587424       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 21:40:11.587531       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 21:40:11.587601       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:40:11.587620       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [7458febb17f623ccfde91c37aa81891ecb2f13e33f73798e42f7a457866c74d1] <==
	* E0108 21:37:10.141384       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6d58c4d9b5" failed with pods "dashboard-metrics-scraper-6d58c4d9b5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0108 21:37:10.141383       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6d58c4d9b5", UID:"142ee44c-1268-40a7-9afb-209da0daee73", APIVersion:"apps/v1", ResourceVersion:"446", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6d58c4d9b5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0108 21:37:10.142993       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0108 21:37:10.142990       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"0eb334dc-1a45-44db-a9dc-c7a3d4e3d0eb", APIVersion:"apps/v1", ResourceVersion:"451", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0108 21:37:10.146384       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0108 21:37:10.146375       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"0eb334dc-1a45-44db-a9dc-c7a3d4e3d0eb", APIVersion:"apps/v1", ResourceVersion:"451", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0108 21:37:10.149817       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6d58c4d9b5" failed with pods "dashboard-metrics-scraper-6d58c4d9b5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0108 21:37:10.149812       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6d58c4d9b5", UID:"142ee44c-1268-40a7-9afb-209da0daee73", APIVersion:"apps/v1", ResourceVersion:"446", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6d58c4d9b5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0108 21:37:10.722551       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-7958775c", UID:"531edd17-73d1-419b-9c33-dc92c66066c5", APIVersion:"apps/v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-7958775c-7tzmp
	I0108 21:37:11.214607       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6d58c4d9b5", UID:"142ee44c-1268-40a7-9afb-209da0daee73", APIVersion:"apps/v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-6d58c4d9b5-hzcd6
	I0108 21:37:11.215716       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"0eb334dc-1a45-44db-a9dc-c7a3d4e3d0eb", APIVersion:"apps/v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-84b68f675b-t6nsd
	E0108 21:37:38.990663       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:37:40.036394       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:38:09.242133       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:38:12.038006       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:38:39.493613       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:38:44.039407       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:39:09.745093       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:39:16.041001       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:39:39.996781       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:39:48.042485       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:40:10.248220       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:40:20.043976       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:40:40.499971       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:40:52.045375       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [0bb48abbf3066147aa288f5bdce84119ea74e56fe7ae2cf25ac4776d3cd01e62] <==
	* W0108 21:37:08.172189       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0108 21:37:08.178399       1 node.go:135] Successfully retrieved node IP: 192.168.76.2
	I0108 21:37:08.178430       1 server_others.go:149] Using iptables Proxier.
	I0108 21:37:08.178857       1 server.go:529] Version: v1.16.0
	I0108 21:37:08.179577       1 config.go:131] Starting endpoints config controller
	I0108 21:37:08.179599       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0108 21:37:08.179648       1 config.go:313] Starting service config controller
	I0108 21:37:08.179671       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0108 21:37:08.279769       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0108 21:37:08.279875       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [34023d0c3e2fc5bef5079f1ef1f9d579d8a9492830d49eeabfbacdba442fea14] <==
	* E0108 21:36:48.214857       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:36:48.214865       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:36:48.215011       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:36:48.215167       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:36:48.214865       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:36:48.218773       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:36:48.218868       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:36:48.218906       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:36:48.218987       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:36:48.219062       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:36:48.219087       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:36:49.216126       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:36:49.219650       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:36:49.220671       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:36:49.221806       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:36:49.222909       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:36:49.224170       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:36:49.225242       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:36:49.226271       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:36:49.227413       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:36:49.228824       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:36:49.230035       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:37:07.733901       1 factory.go:585] pod is already present in the activeQ
	E0108 21:37:09.716350       1 factory.go:585] pod is already present in the activeQ
	E0108 21:37:11.314020       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:31:15 UTC, end at Sun 2023-01-08 21:41:10 UTC. --
	Jan 08 21:39:08 old-k8s-version-211828 kubelet[3000]: E0108 21:39:08.776155    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:39:13 old-k8s-version-211828 kubelet[3000]: E0108 21:39:13.777538    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:39:18 old-k8s-version-211828 kubelet[3000]: E0108 21:39:18.778376    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:39:23 old-k8s-version-211828 kubelet[3000]: E0108 21:39:23.779266    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:39:28 old-k8s-version-211828 kubelet[3000]: E0108 21:39:28.780242    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:39:33 old-k8s-version-211828 kubelet[3000]: E0108 21:39:33.781009    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:39:38 old-k8s-version-211828 kubelet[3000]: E0108 21:39:38.782000    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:39:43 old-k8s-version-211828 kubelet[3000]: E0108 21:39:43.783287    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:39:48 old-k8s-version-211828 kubelet[3000]: E0108 21:39:48.784082    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:39:53 old-k8s-version-211828 kubelet[3000]: E0108 21:39:53.784975    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:39:58 old-k8s-version-211828 kubelet[3000]: E0108 21:39:58.785768    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:40:03 old-k8s-version-211828 kubelet[3000]: E0108 21:40:03.786576    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:40:08 old-k8s-version-211828 kubelet[3000]: E0108 21:40:08.787313    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:40:13 old-k8s-version-211828 kubelet[3000]: E0108 21:40:13.788060    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:40:18 old-k8s-version-211828 kubelet[3000]: E0108 21:40:18.788788    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:40:23 old-k8s-version-211828 kubelet[3000]: E0108 21:40:23.789569    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:40:28 old-k8s-version-211828 kubelet[3000]: E0108 21:40:28.790338    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:40:33 old-k8s-version-211828 kubelet[3000]: E0108 21:40:33.791073    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:40:38 old-k8s-version-211828 kubelet[3000]: E0108 21:40:38.791792    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:40:43 old-k8s-version-211828 kubelet[3000]: E0108 21:40:43.792460    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:40:48 old-k8s-version-211828 kubelet[3000]: E0108 21:40:48.793226    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:40:53 old-k8s-version-211828 kubelet[3000]: E0108 21:40:53.794028    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:40:58 old-k8s-version-211828 kubelet[3000]: E0108 21:40:58.794762    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:41:03 old-k8s-version-211828 kubelet[3000]: E0108 21:41:03.795598    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:41:08 old-k8s-version-211828 kubelet[3000]: E0108 21:41:08.796392    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-211828 -n old-k8s-version-211828
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-211828 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-5644d7b6d9-hk88c metrics-server-7958775c-7tzmp storage-provisioner dashboard-metrics-scraper-6d58c4d9b5-hzcd6 kubernetes-dashboard-84b68f675b-t6nsd
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-211828 describe pod coredns-5644d7b6d9-hk88c metrics-server-7958775c-7tzmp storage-provisioner dashboard-metrics-scraper-6d58c4d9b5-hzcd6 kubernetes-dashboard-84b68f675b-t6nsd
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-211828 describe pod coredns-5644d7b6d9-hk88c metrics-server-7958775c-7tzmp storage-provisioner dashboard-metrics-scraper-6d58c4d9b5-hzcd6 kubernetes-dashboard-84b68f675b-t6nsd: exit status 1 (65.243029ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-hk88c" not found
	Error from server (NotFound): pods "metrics-server-7958775c-7tzmp" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6d58c4d9b5-hzcd6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-84b68f675b-t6nsd" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-211828 describe pod coredns-5644d7b6d9-hk88c metrics-server-7958775c-7tzmp storage-provisioner dashboard-metrics-scraper-6d58c4d9b5-hzcd6 kubernetes-dashboard-84b68f675b-t6nsd: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (596.37s)
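The two repeated symptoms in the logs above show where triage would start: the apiserver and controller-manager cannot reach metrics.k8s.io/v1beta1 (HTTP 503), and the kubelet never sees an initialized CNI plugin. The commands below are a minimal triage sketch, not part of the recorded test output; they assume the old-k8s-version-211828 profile and kubeconfig context from this run still exist and that the metrics-server addon keeps its default deployment name.

	# Is the aggregated metrics API Available? (explains the repeated 503s)
	kubectl --context old-k8s-version-211828 get apiservice v1beta1.metrics.k8s.io -o wide
	kubectl --context old-k8s-version-211828 -n kube-system get deploy metrics-server

	# Did any CNI config reach the node? (kubelet keeps reporting "cni plugin not initialized")
	out/minikube-linux-amd64 ssh -p old-k8s-version-211828 -- sudo crictl info
	out/minikube-linux-amd64 ssh -p old-k8s-version-211828 -- sudo ls /etc/cni/net.d /etc/cni/net.mk   # minikube may use either conf_dir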

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (534.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-211859 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
E0108 21:31:59.210345   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:32:13.344565   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
E0108 21:32:39.301228   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-211859 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: exit status 80 (8m51.946942315s)

                                                
                                                
-- stdout --
	* [no-preload-211859] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node no-preload-211859 in cluster no-preload-211859
	* Pulling base image ...
	* Restarting existing docker container for "no-preload-211859" ...
	* Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image k8s.gcr.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-211859 addons enable metrics-server	
	
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:31:47.372197  278286 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:31:47.372417  278286 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:31:47.372427  278286 out.go:309] Setting ErrFile to fd 2...
	I0108 21:31:47.372431  278286 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:31:47.372606  278286 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:31:47.373198  278286 out.go:303] Setting JSON to false
	I0108 21:31:47.374571  278286 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4457,"bootTime":1673209051,"procs":559,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:31:47.374633  278286 start.go:135] virtualization: kvm guest
	I0108 21:31:47.377099  278286 out.go:177] * [no-preload-211859] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:31:47.378501  278286 notify.go:220] Checking for updates...
	I0108 21:31:47.380024  278286 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:31:47.381860  278286 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:31:47.383393  278286 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:31:47.384839  278286 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:31:47.386173  278286 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:31:47.387871  278286 config.go:180] Loaded profile config "no-preload-211859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:31:47.388286  278286 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:31:47.418011  278286 docker.go:137] docker version: linux-20.10.22
	I0108 21:31:47.418111  278286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:31:47.514291  278286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:31:47.43856006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:31:47.514399  278286 docker.go:254] overlay module found
	I0108 21:31:47.517594  278286 out.go:177] * Using the docker driver based on existing profile
	I0108 21:31:47.519152  278286 start.go:294] selected driver: docker
	I0108 21:31:47.519173  278286 start.go:838] validating driver "docker" against &{Name:no-preload-211859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:no-preload-211859 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:31:47.519311  278286 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:31:47.520298  278286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:31:47.620459  278286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:31:47.542696624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:31:47.620698  278286 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:31:47.620723  278286 cni.go:95] Creating CNI manager for ""
	I0108 21:31:47.620731  278286 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:31:47.620745  278286 start_flags.go:317] config:
	{Name:no-preload-211859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:no-preload-211859 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:31:47.623862  278286 out.go:177] * Starting control plane node no-preload-211859 in cluster no-preload-211859
	I0108 21:31:47.625336  278286 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:31:47.626861  278286 out.go:177] * Pulling base image ...
	I0108 21:31:47.628400  278286 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:31:47.628429  278286 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:31:47.628561  278286 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/config.json ...
	I0108 21:31:47.628618  278286 cache.go:107] acquiring lock: {Name:mka4eae081deb9dc030a8e6d208cdbfc375fedd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628650  278286 cache.go:107] acquiring lock: {Name:mk5f6bff7f6f0a24f6225496f42d8e8e28b27999 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628705  278286 cache.go:107] acquiring lock: {Name:mk5f9a0ef25a028cc0da95c581faa4f8582f8133 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628775  278286 cache.go:107] acquiring lock: {Name:mk240cd96639812e2ee7ab4caa38c9f49d9f4169 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628774  278286 cache.go:107] acquiring lock: {Name:mk09e8a53a311c6d58c16c85cb6a7a373e3c68b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628784  278286 cache.go:107] acquiring lock: {Name:mk1ba37dc36f668cc1aa7c0cabe840314426c4d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628812  278286 cache.go:107] acquiring lock: {Name:mka15fcca44dc28e79d1a5c07b3e2caf71bae5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628787  278286 cache.go:107] acquiring lock: {Name:mkcc5294a2af912a919e5a940c540341ff897a1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.628907  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 exists
	I0108 21:31:47.628928  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 exists
	I0108 21:31:47.628933  278286 cache.go:96] cache image "registry.k8s.io/pause:3.8" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8" took 121.663µs
	I0108 21:31:47.628938  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 exists
	I0108 21:31:47.628946  278286 cache.go:80] save to tar file registry.k8s.io/pause:3.8 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 succeeded
	I0108 21:31:47.628906  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 21:31:47.628955  278286 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.25.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3" took 317.677µs
	I0108 21:31:47.628958  278286 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.25.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3" took 270.333µs
	I0108 21:31:47.628967  278286 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.25.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 succeeded
	I0108 21:31:47.628967  278286 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.25.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 succeeded
	I0108 21:31:47.628967  278286 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 355.563µs
	I0108 21:31:47.628976  278286 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 21:31:47.628993  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 exists
	I0108 21:31:47.629015  278286 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.25.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3" took 352.452µs
	I0108 21:31:47.629027  278286 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.25.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 succeeded
	I0108 21:31:47.629038  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 exists
	I0108 21:31:47.629049  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 exists
	I0108 21:31:47.629056  278286 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.25.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3" took 323.146µs
	I0108 21:31:47.629064  278286 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.25.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 succeeded
	I0108 21:31:47.629070  278286 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3" took 297.994µs
	I0108 21:31:47.629088  278286 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 succeeded
	I0108 21:31:47.629051  278286 cache.go:115] /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 exists
	I0108 21:31:47.629102  278286 cache.go:96] cache image "registry.k8s.io/etcd:3.5.4-0" -> "/home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0" took 330.46µs
	I0108 21:31:47.629116  278286 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.4-0 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 succeeded
	I0108 21:31:47.629122  278286 cache.go:87] Successfully saved all images to host disk.
	I0108 21:31:47.652827  278286 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:31:47.652851  278286 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:31:47.652870  278286 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:31:47.652900  278286 start.go:364] acquiring machines lock for no-preload-211859: {Name:mk421f625ba7c0f468447c7930aeee12b4ccfc5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:47.653003  278286 start.go:368] acquired machines lock for "no-preload-211859" in 85.079µs
	I0108 21:31:47.653019  278286 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:31:47.653023  278286 fix.go:55] fixHost starting: 
	I0108 21:31:47.653231  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:31:47.676820  278286 fix.go:103] recreateIfNeeded on no-preload-211859: state=Stopped err=<nil>
	W0108 21:31:47.676850  278286 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:31:47.679056  278286 out.go:177] * Restarting existing docker container for "no-preload-211859" ...
	I0108 21:31:47.680453  278286 cli_runner.go:164] Run: docker start no-preload-211859
	I0108 21:31:48.055774  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:31:48.081772  278286 kic.go:415] container "no-preload-211859" state is running.
	I0108 21:31:48.082176  278286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-211859
	I0108 21:31:48.106752  278286 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/config.json ...
	I0108 21:31:48.106996  278286 machine.go:88] provisioning docker machine ...
	I0108 21:31:48.107026  278286 ubuntu.go:169] provisioning hostname "no-preload-211859"
	I0108 21:31:48.107073  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:48.132199  278286 main.go:134] libmachine: Using SSH client type: native
	I0108 21:31:48.132389  278286 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33052 <nil> <nil>}
	I0108 21:31:48.132411  278286 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-211859 && echo "no-preload-211859" | sudo tee /etc/hostname
	I0108 21:31:48.133075  278286 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53914->127.0.0.1:33052: read: connection reset by peer
	I0108 21:31:51.259690  278286 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-211859
	
	I0108 21:31:51.259765  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:51.287159  278286 main.go:134] libmachine: Using SSH client type: native
	I0108 21:31:51.287325  278286 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33052 <nil> <nil>}
	I0108 21:31:51.287351  278286 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-211859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-211859/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-211859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:31:51.403424  278286 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:31:51.403455  278286 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:31:51.403534  278286 ubuntu.go:177] setting up certificates
	I0108 21:31:51.403545  278286 provision.go:83] configureAuth start
	I0108 21:31:51.403600  278286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-211859
	I0108 21:31:51.427972  278286 provision.go:138] copyHostCerts
	I0108 21:31:51.428030  278286 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:31:51.428040  278286 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:31:51.428108  278286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:31:51.428200  278286 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:31:51.428212  278286 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:31:51.428241  278286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:31:51.428291  278286 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:31:51.428298  278286 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:31:51.428324  278286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:31:51.428366  278286 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.no-preload-211859 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-211859]
	I0108 21:31:51.573024  278286 provision.go:172] copyRemoteCerts
	I0108 21:31:51.573080  278286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:31:51.573115  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:51.597019  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:31:51.682658  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:31:51.699465  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 21:31:51.716152  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:31:51.732857  278286 provision.go:86] duration metric: configureAuth took 329.295378ms
	I0108 21:31:51.732886  278286 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:31:51.733029  278286 config.go:180] Loaded profile config "no-preload-211859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:31:51.733040  278286 machine.go:91] provisioned docker machine in 3.626026428s
	I0108 21:31:51.733046  278286 start.go:300] post-start starting for "no-preload-211859" (driver="docker")
	I0108 21:31:51.733052  278286 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:31:51.733093  278286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:31:51.733143  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:51.758975  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:31:51.842569  278286 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:31:51.845292  278286 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:31:51.845322  278286 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:31:51.845336  278286 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:31:51.845349  278286 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:31:51.845361  278286 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:31:51.845402  278286 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:31:51.845479  278286 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:31:51.845561  278286 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:31:51.851717  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:31:51.868480  278286 start.go:303] post-start completed in 135.417503ms
	I0108 21:31:51.868534  278286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:31:51.868562  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:51.892345  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:31:51.979939  278286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:31:51.983706  278286 fix.go:57] fixHost completed within 4.330677273s
	I0108 21:31:51.983729  278286 start.go:83] releasing machines lock for "no-preload-211859", held for 4.33071417s
	I0108 21:31:51.983817  278286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-211859
	I0108 21:31:52.008250  278286 ssh_runner.go:195] Run: cat /version.json
	I0108 21:31:52.008306  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:52.008345  278286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:31:52.008415  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:31:52.036127  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:31:52.036559  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:31:52.148615  278286 ssh_runner.go:195] Run: systemctl --version
	I0108 21:31:52.152487  278286 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:31:52.163721  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:31:52.173278  278286 docker.go:189] disabling docker service ...
	I0108 21:31:52.173325  278286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:31:52.183249  278286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:31:52.192257  278286 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:31:52.270587  278286 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:31:52.341138  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:31:52.350264  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:31:52.362467  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:31:52.370150  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:31:52.377936  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:31:52.385834  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
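These four sed edits pin the sandbox (pause) image to registry.k8s.io/pause:3.8, relax restrict_oom_score_adj, keep runc on the cgroupfs driver (SystemdCgroup = false), and point the CRI CNI config directory at /etc/cni/net.mk. A quick way to confirm the rewritten values on the node, using the same ssh entry point as the rest of this test (illustrative only; the surrounding TOML sections depend on the containerd version shipped in the base image):
	out/minikube-linux-amd64 ssh -p no-preload-211859 -- sudo grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml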
	I0108 21:31:52.393630  278286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:31:52.400059  278286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:31:52.406552  278286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:31:52.484476  278286 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:31:52.547909  278286 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:31:52.547978  278286 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:31:52.551296  278286 start.go:472] Will wait 60s for crictl version
	I0108 21:31:52.551354  278286 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:31:52.578456  278286 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:31:52Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 21:32:03.626227  278286 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:32:03.650433  278286 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:32:03.650513  278286 ssh_runner.go:195] Run: containerd --version
	I0108 21:32:03.673911  278286 ssh_runner.go:195] Run: containerd --version
	I0108 21:32:03.701612  278286 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:32:03.703195  278286 cli_runner.go:164] Run: docker network inspect no-preload-211859 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:32:03.727853  278286 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0108 21:32:03.731414  278286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:32:03.741350  278286 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:32:03.741394  278286 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:32:03.765441  278286 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:32:03.765465  278286 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:32:03.765518  278286 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:32:03.789768  278286 cni.go:95] Creating CNI manager for ""
	I0108 21:32:03.789800  278286 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:32:03.789817  278286 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:32:03.789833  278286 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-211859 NodeName:no-preload-211859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:32:03.789993  278286 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-211859"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:32:03.790112  278286 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-211859 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:no-preload-211859 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
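In the drop-in above, the empty ExecStart= line is the usual systemd idiom: it clears the ExecStart inherited from the base kubelet.service so the second ExecStart= fully replaces the command line rather than appending a second one. The merged unit can be inspected on the node with (illustrative):
	out/minikube-linux-amd64 ssh -p no-preload-211859 -- sudo systemctl cat kubelet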
	I0108 21:32:03.790181  278286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:32:03.797254  278286 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:32:03.797327  278286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:32:03.804119  278286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (510 bytes)
	I0108 21:32:03.816978  278286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:32:03.830009  278286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2046 bytes)
	I0108 21:32:03.844130  278286 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:32:03.847152  278286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:32:03.856758  278286 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859 for IP: 192.168.85.2
	I0108 21:32:03.856858  278286 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:32:03.856896  278286 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:32:03.856956  278286 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/client.key
	I0108 21:32:03.857006  278286 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.key.43b9df8c
	I0108 21:32:03.857041  278286 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/proxy-client.key
	I0108 21:32:03.857131  278286 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:32:03.857160  278286 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:32:03.857173  278286 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:32:03.857196  278286 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:32:03.857224  278286 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:32:03.857244  278286 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:32:03.857279  278286 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:32:03.857853  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:32:03.877228  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:32:03.894973  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:32:03.912325  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/no-preload-211859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 21:32:03.929477  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:32:03.946055  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:32:03.962744  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:32:03.979740  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:32:03.996409  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:32:04.012779  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:32:04.029143  278286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:32:04.045747  278286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:32:04.058662  278286 ssh_runner.go:195] Run: openssl version
	I0108 21:32:04.063563  278286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:32:04.070705  278286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:32:04.073719  278286 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:32:04.073767  278286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:32:04.078393  278286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:32:04.085125  278286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:32:04.092323  278286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:32:04.095231  278286 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:32:04.095276  278286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:32:04.099886  278286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:32:04.107081  278286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:32:04.114108  278286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:32:04.117029  278286 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:32:04.117072  278286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:32:04.121793  278286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
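The openssl x509 -hash calls and the hash-named symlinks above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash lookup convention for /etc/ssl/certs: a CA certificate is located through a link named <subject-hash>.0, so the per-file equivalent of c_rehash is the sketch below (same cert as the last command above):
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem)
	sudo ln -fs /etc/ssl/certs/10372.pem "/etc/ssl/certs/${hash}.0"   # resolves to 51391683.0 in this run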
	I0108 21:32:04.128357  278286 kubeadm.go:396] StartCluster: {Name:no-preload-211859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:no-preload-211859 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:32:04.128442  278286 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:32:04.128495  278286 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:32:04.152477  278286 cri.go:87] found id: "da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c"
	I0108 21:32:04.152498  278286 cri.go:87] found id: "640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6"
	I0108 21:32:04.152505  278286 cri.go:87] found id: "7b61203838e946e52bb257036892e21a8137d6b02ae6e307cba917eba43045f1"
	I0108 21:32:04.152511  278286 cri.go:87] found id: "e5292d3c9357ae424b2211a5576a5c0d1dc2148f92dbb693b2b173d02a43a659"
	I0108 21:32:04.152516  278286 cri.go:87] found id: "4777a2f6ea154d2e676477c6810e4eebb38bfca013c0990a8605fa7676818ecf"
	I0108 21:32:04.152523  278286 cri.go:87] found id: "1c6e8899fc497e069140e33049c350dcdfe8bcafcaaba19c4666917216092e42"
	I0108 21:32:04.152528  278286 cri.go:87] found id: ""
	I0108 21:32:04.152561  278286 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:32:04.163354  278286 cri.go:114] JSON = null
	W0108 21:32:04.163405  278286 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0108 21:32:04.163457  278286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:32:04.169935  278286 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:32:04.169956  278286 kubeadm.go:627] restartCluster start
	I0108 21:32:04.169988  278286 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:32:04.176496  278286 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:04.177334  278286 kubeconfig.go:135] verify returned: extract IP: "no-preload-211859" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:32:04.177774  278286 kubeconfig.go:146] "no-preload-211859" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:32:04.178473  278286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:32:04.179892  278286 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:32:04.186632  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:04.186676  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:04.195110  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:04.395513  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:04.395582  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:04.404046  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:04.595266  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:04.595346  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:04.603669  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:04.795951  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:04.796019  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:04.804763  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:04.996094  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:04.996191  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:05.004793  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:05.196080  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:05.196146  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:05.204564  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:05.395860  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:05.395951  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:05.404477  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:05.595811  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:05.595891  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:05.604562  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:05.795835  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:05.795898  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:05.804403  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:05.995694  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:05.995762  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:06.004274  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:06.195535  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:06.195616  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:06.204305  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:06.395611  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:06.395692  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:06.404197  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:06.595519  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:06.595606  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:06.604401  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:06.795696  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:06.795764  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:06.804957  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:06.995206  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:06.995292  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:07.004148  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:07.195361  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:07.195428  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:07.204056  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:07.204077  278286 api_server.go:165] Checking apiserver status ...
	I0108 21:32:07.204110  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:32:07.212048  278286 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:07.212079  278286 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0108 21:32:07.212087  278286 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:32:07.212099  278286 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:32:07.212145  278286 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:32:07.235576  278286 cri.go:87] found id: "da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c"
	I0108 21:32:07.235604  278286 cri.go:87] found id: "640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6"
	I0108 21:32:07.235616  278286 cri.go:87] found id: "7b61203838e946e52bb257036892e21a8137d6b02ae6e307cba917eba43045f1"
	I0108 21:32:07.235626  278286 cri.go:87] found id: "e5292d3c9357ae424b2211a5576a5c0d1dc2148f92dbb693b2b173d02a43a659"
	I0108 21:32:07.235636  278286 cri.go:87] found id: "4777a2f6ea154d2e676477c6810e4eebb38bfca013c0990a8605fa7676818ecf"
	I0108 21:32:07.235650  278286 cri.go:87] found id: "1c6e8899fc497e069140e33049c350dcdfe8bcafcaaba19c4666917216092e42"
	I0108 21:32:07.235665  278286 cri.go:87] found id: ""
	I0108 21:32:07.235675  278286 cri.go:232] Stopping containers: [da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c 640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6 7b61203838e946e52bb257036892e21a8137d6b02ae6e307cba917eba43045f1 e5292d3c9357ae424b2211a5576a5c0d1dc2148f92dbb693b2b173d02a43a659 4777a2f6ea154d2e676477c6810e4eebb38bfca013c0990a8605fa7676818ecf 1c6e8899fc497e069140e33049c350dcdfe8bcafcaaba19c4666917216092e42]
	I0108 21:32:07.235717  278286 ssh_runner.go:195] Run: which crictl
	I0108 21:32:07.238503  278286 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop da42aae8803ba125eac459e51ffdba9e31efc816be3f5a098069edbbec44fc7c 640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6 7b61203838e946e52bb257036892e21a8137d6b02ae6e307cba917eba43045f1 e5292d3c9357ae424b2211a5576a5c0d1dc2148f92dbb693b2b173d02a43a659 4777a2f6ea154d2e676477c6810e4eebb38bfca013c0990a8605fa7676818ecf 1c6e8899fc497e069140e33049c350dcdfe8bcafcaaba19c4666917216092e42
	I0108 21:32:07.262960  278286 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:32:07.272749  278286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:32:07.279614  278286 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan  8 21:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  8 21:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jan  8 21:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan  8 21:19 /etc/kubernetes/scheduler.conf
	
	I0108 21:32:07.279671  278286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 21:32:07.286115  278286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 21:32:07.292656  278286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 21:32:07.299126  278286 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:07.299194  278286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 21:32:07.305509  278286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 21:32:07.312247  278286 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:32:07.312297  278286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 21:32:07.318608  278286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:32:07.325306  278286 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:32:07.325326  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:32:07.369488  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:32:08.118233  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:32:08.253244  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:32:08.303991  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
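Because existing configuration files were found, the restart path re-runs only the certs, kubeconfig, kubelet-start, control-plane and etcd phases instead of a full kubeadm init; the addon phase is applied separately once the apiserver is healthy (21:32:14 below). The available sub-phases can be listed with (illustrative):
	sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase --help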
	I0108 21:32:08.412623  278286 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:32:08.412743  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:32:08.921962  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:32:09.421918  278286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:32:09.434129  278286 api_server.go:71] duration metric: took 1.021506771s to wait for apiserver process to appear ...
	I0108 21:32:09.434161  278286 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:32:09.434173  278286 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0108 21:32:09.434545  278286 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0108 21:32:09.935273  278286 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0108 21:32:12.725708  278286 api_server.go:278] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:32:12.725738  278286 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:32:12.935144  278286 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0108 21:32:12.939566  278286 api_server.go:278] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:32:12.939597  278286 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:32:13.435040  278286 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0108 21:32:13.439568  278286 api_server.go:278] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:32:13.439591  278286 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:32:13.934877  278286 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0108 21:32:13.939903  278286 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0108 21:32:13.945633  278286 api_server.go:140] control plane version: v1.25.3
	I0108 21:32:13.945662  278286 api_server.go:130] duration metric: took 4.511494879s to wait for apiserver health ...
	I0108 21:32:13.945673  278286 cni.go:95] Creating CNI manager for ""
	I0108 21:32:13.945681  278286 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:32:13.948245  278286 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:32:13.949871  278286 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:32:13.953423  278286 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:32:13.953439  278286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:32:13.966338  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
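The applied manifest is the kindnet CNI deployment recommended above for the docker driver + containerd combination; its pod (kindnet-vh4hl) shows up in the kube-system listing that follows. The rollout can also be checked directly with the same kubectl binary (illustrative):
	out/minikube-linux-amd64 ssh -p no-preload-211859 -- sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/etc/kubernetes/admin.conf -n kube-system get daemonsets,pods -o wide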
	I0108 21:32:14.826804  278286 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:32:14.833621  278286 system_pods.go:59] 9 kube-system pods found
	I0108 21:32:14.833651  278286 system_pods.go:61] "coredns-565d847f94-jw8vf" [273a87b0-0dde-4637-b287-732fde04519d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:32:14.833659  278286 system_pods.go:61] "etcd-no-preload-211859" [ce7270e1-24af-4c4b-9e07-7c30d4743484] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:32:14.833668  278286 system_pods.go:61] "kindnet-vh4hl" [c002c329-15ad-4066-8f90-bee3d9d18431] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:32:14.833673  278286 system_pods.go:61] "kube-apiserver-no-preload-211859" [3542f7bf-5681-4ded-a281-872f51789333] Running
	I0108 21:32:14.833682  278286 system_pods.go:61] "kube-controller-manager-no-preload-211859" [44859af0-ff02-4470-9f28-d6952d195bbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:32:14.833690  278286 system_pods.go:61] "kube-proxy-zb6wz" [8da901e0-be84-453e-895c-7b0b2c60bc76] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:32:14.833697  278286 system_pods.go:61] "kube-scheduler-no-preload-211859" [3f953e75-f501-4cef-83cf-e39f1cab3b94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:32:14.833707  278286 system_pods.go:61] "metrics-server-5c8fd5cf8-cr777" [92f4ef12-2c95-4b70-b116-f8552a32416e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:32:14.833716  278286 system_pods.go:61] "storage-provisioner" [05464a1d-53d5-4d21-a5a3-3453e21df72a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:32:14.833721  278286 system_pods.go:74] duration metric: took 6.897553ms to wait for pod list to return data ...
	I0108 21:32:14.833731  278286 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:32:14.836514  278286 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:32:14.836540  278286 node_conditions.go:123] node cpu capacity is 8
	I0108 21:32:14.836552  278286 node_conditions.go:105] duration metric: took 2.81613ms to run NodePressure ...
	I0108 21:32:14.836572  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:32:14.970125  278286 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 21:32:14.974329  278286 kubeadm.go:778] kubelet initialised
	I0108 21:32:14.974351  278286 kubeadm.go:779] duration metric: took 4.202323ms waiting for restarted kubelet to initialise ...
	I0108 21:32:14.974360  278286 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:32:14.979113  278286 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-jw8vf" in "kube-system" namespace to be "Ready" ...
	I0108 21:32:16.985255  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:19.485224  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:21.485328  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:23.485383  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:25.984908  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:28.484769  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:30.985383  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:33.485160  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:35.985632  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:38.485442  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... 92 further pod_ready.go:102 entries elided: the same Pending/Unschedulable status for pod "coredns-565d847f94-jw8vf" was logged roughly every 2.5s from 21:32:40 through 21:36:09, each time with "0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }" ...]
	I0108 21:36:12.485725  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:14.982268  278286 pod_ready.go:81] duration metric: took 4m0.003125371s waiting for pod "coredns-565d847f94-jw8vf" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:14.982291  278286 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-jw8vf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:36:14.982340  278286 pod_ready.go:38] duration metric: took 4m0.007969001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:14.982370  278286 kubeadm.go:631] restartCluster took 4m10.8124082s
	W0108 21:36:14.982580  278286 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:36:14.982625  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:36:17.712121  278286 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.729470949s)
	I0108 21:36:17.712185  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:17.722197  278286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:17.729255  278286 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:36:17.729298  278286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:36:17.736461  278286 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
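	The missing /etc/kubernetes/*.conf files are expected at this point: the forced "kubeadm reset" a few lines above removes them, so the stale-config cleanup is skipped and a fresh "kubeadm init" follows. A minimal way to spot-check that state by hand is sketched below (an assumption-laden example, not harness output: it assumes the no-preload-211859 profile is still running and uses the generic "minikube ssh -p <profile> -- <cmd>" form):
	  out/minikube-linux-amd64 ssh -p no-preload-211859 -- sudo ls -la /etc/kubernetes/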
	I0108 21:36:17.736503  278286 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:36:17.776074  278286 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:36:17.776141  278286 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:36:17.803264  278286 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:36:17.803362  278286 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:36:17.803405  278286 kubeadm.go:317] OS: Linux
	I0108 21:36:17.803445  278286 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:36:17.803517  278286 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:36:17.803559  278286 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:36:17.803599  278286 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:36:17.803644  278286 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:36:17.803713  278286 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:36:17.803782  278286 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:36:17.803823  278286 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:36:17.803861  278286 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:36:17.868509  278286 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:36:17.868640  278286 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:36:17.868786  278286 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:36:17.980682  278286 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:36:17.985661  278286 out.go:204]   - Generating certificates and keys ...
	I0108 21:36:17.985801  278286 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:36:17.985902  278286 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:36:17.986004  278286 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:36:17.986091  278286 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:36:17.986183  278286 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:36:17.986259  278286 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:36:17.986341  278286 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:36:17.986417  278286 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:36:17.986542  278286 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:36:17.986649  278286 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:36:17.986701  278286 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:36:17.986780  278286 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:36:18.059736  278286 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:36:18.157820  278286 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:36:18.409007  278286 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:36:18.508551  278286 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:36:18.520890  278286 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:36:18.521889  278286 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:36:18.521949  278286 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:36:18.609158  278286 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:36:18.611390  278286 out.go:204]   - Booting up control plane ...
	I0108 21:36:18.611574  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:36:18.612908  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:36:18.613799  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:36:18.614568  278286 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:36:18.616788  278286 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:36:24.619697  278286 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002882 seconds
	I0108 21:36:24.619903  278286 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:36:24.627998  278286 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:36:25.143041  278286 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:36:25.143241  278286 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-211859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:36:25.650094  278286 kubeadm.go:317] [bootstrap-token] Using token: 0hs0sx.2quwwfjv2ljr7rle
	I0108 21:36:25.651809  278286 out.go:204]   - Configuring RBAC rules ...
	I0108 21:36:25.651961  278286 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:36:25.654307  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:36:25.658950  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:36:25.660952  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:36:25.662921  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:36:25.664784  278286 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:36:25.671893  278286 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:36:25.864621  278286 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:36:26.057684  278286 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:36:26.058669  278286 kubeadm.go:317] 
	I0108 21:36:26.058754  278286 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:36:26.058765  278286 kubeadm.go:317] 
	I0108 21:36:26.058853  278286 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:36:26.058869  278286 kubeadm.go:317] 
	I0108 21:36:26.058904  278286 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:36:26.058983  278286 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:36:26.059054  278286 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:36:26.059063  278286 kubeadm.go:317] 
	I0108 21:36:26.059140  278286 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 21:36:26.059150  278286 kubeadm.go:317] 
	I0108 21:36:26.059219  278286 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:36:26.059229  278286 kubeadm.go:317] 
	I0108 21:36:26.059298  278286 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:36:26.059393  278286 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:36:26.059498  278286 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:36:26.059510  278286 kubeadm.go:317] 
	I0108 21:36:26.059614  278286 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:36:26.059726  278286 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:36:26.059744  278286 kubeadm.go:317] 
	I0108 21:36:26.059848  278286 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 0hs0sx.2quwwfjv2ljr7rle \
	I0108 21:36:26.059981  278286 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:36:26.060005  278286 kubeadm.go:317] 	--control-plane 
	I0108 21:36:26.060009  278286 kubeadm.go:317] 
	I0108 21:36:26.060140  278286 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:36:26.060156  278286 kubeadm.go:317] 
	I0108 21:36:26.060242  278286 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 0hs0sx.2quwwfjv2ljr7rle \
	I0108 21:36:26.060344  278286 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:36:26.061999  278286 kubeadm.go:317] W0108 21:36:17.771186    3316 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:36:26.062209  278286 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:36:26.062331  278286 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:36:26.062355  278286 cni.go:95] Creating CNI manager for ""
	I0108 21:36:26.062365  278286 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:36:26.064570  278286 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:36:26.066293  278286 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:36:26.112674  278286 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:36:26.112695  278286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:36:26.128247  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:36:26.801006  278286 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:36:26.801092  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:26.801100  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=no-preload-211859 minikube.k8s.io/updated_at=2023_01_08T21_36_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:26.808849  278286 ops.go:34] apiserver oom_adj: -16
	I0108 21:36:26.928188  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:27.522837  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:28.022542  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:28.522922  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.022368  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.522328  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:30.022929  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:30.523064  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:31.022221  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:31.522993  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:32.022733  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:32.522593  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:33.022409  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:33.522830  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.022514  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.522961  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:35.023204  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:35.523260  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:36.022528  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:36.522928  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:37.022841  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:37.523049  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.022536  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.522834  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.586979  278286 kubeadm.go:1067] duration metric: took 11.78594385s to wait for elevateKubeSystemPrivileges.
	I0108 21:36:38.587009  278286 kubeadm.go:398] StartCluster complete in 4m34.458658123s
	I0108 21:36:38.587037  278286 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:38.587148  278286 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:36:38.588149  278286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:39.105452  278286 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-211859" rescaled to 1
	I0108 21:36:39.105521  278286 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:36:39.107702  278286 out.go:177] * Verifying Kubernetes components...
	I0108 21:36:39.105557  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:36:39.105612  278286 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:36:39.105739  278286 config.go:180] Loaded profile config "no-preload-211859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:36:39.109968  278286 addons.go:65] Setting storage-provisioner=true in profile "no-preload-211859"
	I0108 21:36:39.109979  278286 addons.go:65] Setting default-storageclass=true in profile "no-preload-211859"
	I0108 21:36:39.109999  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:39.110001  278286 addons.go:227] Setting addon storage-provisioner=true in "no-preload-211859"
	I0108 21:36:39.110004  278286 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-211859"
	W0108 21:36:39.110010  278286 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:36:39.110055  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.109970  278286 addons.go:65] Setting dashboard=true in profile "no-preload-211859"
	I0108 21:36:39.110159  278286 addons.go:227] Setting addon dashboard=true in "no-preload-211859"
	W0108 21:36:39.110169  278286 addons.go:236] addon dashboard should already be in state true
	I0108 21:36:39.110200  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.109981  278286 addons.go:65] Setting metrics-server=true in profile "no-preload-211859"
	I0108 21:36:39.110261  278286 addons.go:227] Setting addon metrics-server=true in "no-preload-211859"
	W0108 21:36:39.110276  278286 addons.go:236] addon metrics-server should already be in state true
	I0108 21:36:39.110330  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.110352  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110511  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110572  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110706  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.151624  278286 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:36:39.153337  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:36:39.153355  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:36:39.153407  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.155756  278286 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:36:39.157349  278286 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:36:39.157371  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:36:39.157418  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.160291  278286 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:36:39.157827  278286 addons.go:227] Setting addon default-storageclass=true in "no-preload-211859"
	W0108 21:36:39.162099  278286 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:36:39.162135  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.162607  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.164649  278286 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:36:39.166241  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:36:39.166260  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:36:39.166314  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.193544  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.199785  278286 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:36:39.199812  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:36:39.199862  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.205498  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.208611  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.231311  278286 node_ready.go:35] waiting up to 6m0s for node "no-preload-211859" to be "Ready" ...
	I0108 21:36:39.231694  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:36:39.240040  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.426253  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:36:39.426846  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:36:39.426865  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:36:39.436437  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:36:39.438425  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:36:39.438452  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:36:39.523837  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:36:39.523905  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:36:39.532411  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:36:39.532499  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:36:39.615631  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:36:39.615719  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:36:39.626445  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:36:39.626521  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:36:39.639382  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:36:39.639451  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:36:39.725135  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:36:39.731545  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:36:39.731573  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:36:39.827181  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:36:39.827289  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:36:39.917954  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:36:39.917981  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:36:40.011154  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:36:40.011186  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:36:40.017536  278286 start.go:826] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
	I0108 21:36:40.033803  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:36:40.033827  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:36:40.117534  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:36:40.522822  278286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.096529518s)
	I0108 21:36:40.522881  278286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.086407927s)
	I0108 21:36:40.714945  278286 addons.go:457] Verifying addon metrics-server=true in "no-preload-211859"
	I0108 21:36:41.016673  278286 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-211859 addons enable metrics-server	
	
	
	I0108 21:36:41.018352  278286 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0108 21:36:41.019949  278286 addons.go:488] enableAddons completed in 1.914342148s
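	With storage-provisioner, default-storageclass, metrics-server and dashboard applied, the only thing left blocking this start is node readiness (polled below). A short sketch for checking the addon workloads independently of that wait, assuming the kubeconfig context carries the profile name (minikube's default) and the addon manifests keep their default labels and namespaces:
	  kubectl --context no-preload-211859 -n kube-system get pods -l k8s-app=metrics-server
	  kubectl --context no-preload-211859 -n kubernetes-dashboard get pods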
	I0108 21:36:41.239026  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:43.239329  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:45.239687  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:47.739338  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:49.739648  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:52.238824  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:54.239313  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:56.739563  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:59.239207  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:01.738681  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:03.739097  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:05.739401  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:07.739683  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:09.740319  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:12.239302  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:14.239339  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:16.239538  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:18.738947  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:20.739953  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:23.239090  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:25.239428  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:27.738878  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:29.739083  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:31.739252  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:33.739342  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:36.238973  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:38.239104  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:40.239437  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:42.739717  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:45.239105  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:47.738847  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:49.739115  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:52.238899  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:54.239128  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:56.739014  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:59.239171  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:01.239292  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:03.738362  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:05.738653  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:07.739372  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:10.239775  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:12.739231  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:15.238970  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:17.738673  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:19.738980  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:22.238583  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:24.239366  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:26.738352  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:28.739245  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:31.238599  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:33.239230  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:35.738754  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:38.239549  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:40.738973  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:42.739381  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:45.238776  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:47.238948  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:49.739156  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:52.239344  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:54.239534  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:56.738615  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:58.738759  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:00.739100  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:03.239262  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:05.739203  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:08.239116  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:10.239161  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:12.738523  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:14.739235  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:17.239112  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:19.738653  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:21.738764  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:23.738915  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:26.239205  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:28.239272  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:30.738619  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:32.739223  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:35.238771  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:37.238972  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:39.239140  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:41.739302  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:44.238840  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:46.239243  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:48.739022  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:51.238630  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:53.739288  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:56.239051  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:58.738520  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:00.739017  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:02.739209  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:04.739248  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:06.739344  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:09.239054  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:11.739385  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:14.239654  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:16.739048  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:19.238509  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:21.238761  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:23.239162  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:25.239455  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:27.240625  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:29.739116  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:31.739148  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:34.238950  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:36.239143  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:38.738709  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:39.241032  278286 node_ready.go:38] duration metric: took 4m0.009684254s waiting for node "no-preload-211859" to be "Ready" ...
	I0108 21:40:39.243691  278286 out.go:177] 
	W0108 21:40:39.245553  278286 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:40:39.245570  278286 out.go:239] * 
	* 
	W0108 21:40:39.246458  278286 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:40:39.249123  278286 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-211859 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3": exit status 80
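The failure mode matches the earlier restart attempt in this test: kubeadm init succeeds, but the node never reports Ready within the 6m wait, which is also why CoreDNS stayed Pending behind the node.kubernetes.io/not-ready taint. A minimal triage sketch against the still-running profile (hedged: it assumes the kubeconfig context is named after the profile, minikube's default, and uses only standard kubectl/minikube commands):

    kubectl --context no-preload-211859 get nodes -o wide
    kubectl --context no-preload-211859 describe node no-preload-211859
    kubectl --context no-preload-211859 -n kube-system get pods -o wide
    out/minikube-linux-amd64 -p no-preload-211859 logs --file=logs.txt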
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-211859
helpers_test.go:235: (dbg) docker inspect no-preload-211859:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65",
	        "Created": "2023-01-08T21:19:00.370984432Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 278593,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:31:48.048620229Z",
	            "FinishedAt": "2023-01-08T21:31:46.405509925Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/hostname",
	        "HostsPath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/hosts",
	        "LogPath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65-json.log",
	        "Name": "/no-preload-211859",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-211859:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-211859",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676-init/diff:/var/lib/docker/overlay2/08b33854295261bb38a0074055b3dd7f2f489d35ea70ee5a594182ad1dc0fd2b/diff:/var/lib/docker/overlay2/5abe192e44c49d2a25560e4c1ae4a6ba5ec58100f8e1869d306a28b614caf414/diff:/var/lib/docker/overlay2/156a32df2dac676d7cd6558d5ad13be5054eb2fddc8d1b10137bf34bd6b1ddf7/diff:/var/lib/docker/overlay2/f3a5b8630133b5cdc48bbdb98f666efad221386dbd08f5fc096c7ba6d3f31667/diff:/var/lib/docker/overlay2/4d4ccd218e479874818175ec6c4e28f06f0084eed74b9a83af5a96e8bf41e6c7/diff:/var/lib/docker/overlay2/ab8a3247b29cb45a0024dbc2477a50b042f03e39064c09e23d21a2b5f331013c/diff:/var/lib/docker/overlay2/cd6161fa5d366910a9eb537894f374c49b0ba256c81a9c1ad3d4f4786d730668/diff:/var/lib/docker/overlay2/a26d88db409214dee405ed385f40b0d58e37ef670c26e65c1132eb27e3bb984d/diff:/var/lib/docker/overlay2/f76d833c20bf46fe7aec412f931039cdeb58b129552b97228c55fada8b05e63a/diff:/var/lib/docker/overlay2/584beb
80220462b8bc5a6c651735d55544ebef775c71b1a91b92235049a7e9da/diff:/var/lib/docker/overlay2/5209ebd508837f05291d5f22eff307db47f9c94691dd096cdc6ff6446db45461/diff:/var/lib/docker/overlay2/968fc302c0737742d4698bb0026b5cfc50c4b2934930db492c97b346fca2bd67/diff:/var/lib/docker/overlay2/b469d09ff82a75fb0f6dd04264a04b21d3d113c8577afc6dbc6a7dafe4022179/diff:/var/lib/docker/overlay2/2869116cf2f02bdc6170bde623a11022eba8e5c0f84630c9a0fd0a86483d0ed2/diff:/var/lib/docker/overlay2/ad4794ae637b0ce306963007b35562110fba050d9e3cd6196830448c749d7cc6/diff:/var/lib/docker/overlay2/a53b191897dfffb12ad2948cad16d668a0f7c2be5e3e6fd2556fe22d8d608879/diff:/var/lib/docker/overlay2/bbc7772821406640e29f442eb8752d92d7b72e8314035d6cf94255976b90642e/diff:/var/lib/docker/overlay2/5e3ac5600af83e3f6f756895fbd7a66fa97e799f659541508353078d057fd89b/diff:/var/lib/docker/overlay2/79c23e7084aa1de637cb09a7c07679d6c8d4ff6c6e7384f7f15cb549f8186fd6/diff:/var/lib/docker/overlay2/86ef1d4759376f1edb9cf43004ae59ba61200d79acea357b872de5247071531a/diff:/var/lib/d
ocker/overlay2/5c948a7514e051c2a3b4600944d475a674404c1f58b1bd7fd4d7b764be35c3e7/diff:/var/lib/docker/overlay2/71b25d81823337d776e0202f623b2f822a2ebba0e8f935e3211da145bbdac22d/diff:/var/lib/docker/overlay2/4261f26f8c837e7f8f07ba0cd2aca7274aa96561ac0bd104af28d44c15678f8d/diff:/var/lib/docker/overlay2/f2a71b21156c3ad367bb9fe8bde3e75df6681f9e505d34d8e1b635e50aa0e29b/diff:/var/lib/docker/overlay2/9ef1fde443b60bf4cda7ef2a51adcacef4383280f30eb6ee14e23959a5395e18/diff:/var/lib/docker/overlay2/aa727d5a432452b3e1ed2fc994db0bf2d5a47af89bc768a1e2b633baa8319025/diff:/var/lib/docker/overlay2/dc9499b738334b5626de1363086d853c4e6c2c5b2d51f58e8e1ed8346cc39f45/diff:/var/lib/docker/overlay2/03cc962675627b454b9857b5ffcac439143653d35cd42ad886571fbecd955c6c/diff:/var/lib/docker/overlay2/32064543ecb8bae609fda6b220233d7b73f4e601bd320e1d389451fb120c2459/diff:/var/lib/docker/overlay2/a0c00493cc3fdbdaa87f61faf83c8fcedb9ee4d5b24ecc35eceba0429b6dcc88/diff:/var/lib/docker/overlay2/18cbe77f8dab8144889e1deb46a8795fd207afbdfd9ea3fdeb526b72d7e
4a4cb/diff:/var/lib/docker/overlay2/dca4f5d0bca28bc75fa764afd39d07871f051c2d362fc2850ca957b975f75555/diff:/var/lib/docker/overlay2/cd071d8f661b9ff9c6d861fce06842188fda3d307062488b93944a97883f4a67/diff:/var/lib/docker/overlay2/5a0c166a1e11d0bb9f7de4a882be960f70ac972e91b3ab8fd83fc6eec2bf1d5e/diff:/var/lib/docker/overlay2/43eb3ae44d3bf3e6cbf5a66accacb1247ef210213c56a67097949d06794f4bbc/diff:/var/lib/docker/overlay2/b60cb5b525f4aed6b27cf2e451596e134d1185254f3798d5a91357814552e273/diff:/var/lib/docker/overlay2/e5c8c85e78fedb1981436f368ad70bdfe38d240dddceac3d103e1d09bc252cef/diff:/var/lib/docker/overlay2/7bf66ab7d17eda3c6760fe43588afa33bb56279f5a7202de9cc98b4711f0da5b/diff:/var/lib/docker/overlay2/d5dbd6e8a20b8681e7b55b57c382c8c9029da8496cc9e0f4d1eb3b224583535e/diff:/var/lib/docker/overlay2/de3ca9b514aa5071c0fd39d0d80fa080bce6e82e872747ec916ef7bf379c0c2a/diff:/var/lib/docker/overlay2/80bff03df7f2a6639d98a2c99d1cc55a36dc9110501452bfd7bce3bce496541e/diff:/var/lib/docker/overlay2/471bea5a4732f210c3fcf87a02643b615a96f6
39d01ac2c3e71f7886b23733a5/diff:/var/lib/docker/overlay2/29dd84326e3ef52c85008a60a7ff2225fd18f1d51ec3e60f143a0d93e442469d/diff:/var/lib/docker/overlay2/05d035eba3e40a396836d9917064869afd7ec70920f526f8903dc8f5a9f2dce3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-211859",
	                "Source": "/var/lib/docker/volumes/no-preload-211859/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-211859",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-211859",
	                "name.minikube.sigs.k8s.io": "no-preload-211859",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e9cfd9ecce7176b07f9c74477aa29aa9c95c26877e9d01e814ddd93bb6301c38",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e9cfd9ecce71",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-211859": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "23cabd631389",
	                        "no-preload-211859"
	                    ],
	                    "NetworkID": "f6ac14d41355072c0829af36f4aed661fe422e2af93237ea348f6b100ade02e6",
	                    "EndpointID": "37d4278be35398ae25b032f4d4fcc8f365aa4610b071008ea955f6f3bc3face6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
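Note: the Ports block in the inspect output above publishes the node's SSH endpoint as 22/tcp -> 127.0.0.1:33052. A minimal post-mortem sketch for re-deriving that port by hand (assuming the no-preload-211859 container still exists on the Jenkins host) uses the same Go template that cli_runner invokes later in this report:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-211859
	# prints 33052 for the container inspected above
	# hypothetical manual SSH into the node; the id_rsa path follows the
	# machines/<profile>/id_rsa layout shown in the sshutil lines further down
	# and is an assumption for this particular profile
	ssh -i /home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa -p 33052 docker@127.0.0.1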
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-211859 -n no-preload-211859
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-211859 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-212639                 | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-212639                      | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-212639 sudo                                  | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| addons  | enable metrics-server -p old-k8s-version-211828            | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-211828                                  | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-211828                 | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-211828                                  | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --kvm-network=default                                      |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                              |         |         |                     |                     |
	|         | --keep-context=false                                       |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-211859                 | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p no-preload-211859                                       | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-211859                      | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p no-preload-211859                                       | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr                                          |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC | 08 Jan 23 21:32 UTC |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC | 08 Jan 23 21:32 UTC |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-211952           | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC | 08 Jan 23 21:32 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC |                     |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
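The no-preload-211859 start in the table above has no End Time, which matches the SecondStart failure under post-mortem here. Reassembled from its Args column, the invocation was (a reproduction sketch only, assuming the same out/minikube-linux-amd64 binary and an existing profile):

	out/minikube-linux-amd64 start -p no-preload-211859 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.25.3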
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 21:32:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:32:48.271671  282279 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:32:48.271850  282279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:32:48.271858  282279 out.go:309] Setting ErrFile to fd 2...
	I0108 21:32:48.271863  282279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:32:48.271968  282279 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:32:48.272502  282279 out.go:303] Setting JSON to false
	I0108 21:32:48.273983  282279 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4518,"bootTime":1673209051,"procs":571,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:32:48.274047  282279 start.go:135] virtualization: kvm guest
	I0108 21:32:48.276504  282279 out.go:177] * [default-k8s-diff-port-211952] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:32:48.277957  282279 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:32:48.277885  282279 notify.go:220] Checking for updates...
	I0108 21:32:48.279445  282279 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:32:48.280736  282279 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:32:48.281949  282279 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:32:48.283257  282279 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:32:48.285163  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:32:48.285682  282279 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:32:48.316260  282279 docker.go:137] docker version: linux-20.10.22
	I0108 21:32:48.316350  282279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:32:48.413793  282279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:32:48.33729701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:32:48.413905  282279 docker.go:254] overlay module found
	I0108 21:32:48.417336  282279 out.go:177] * Using the docker driver based on existing profile
	I0108 21:32:48.418815  282279 start.go:294] selected driver: docker
	I0108 21:32:48.418829  282279 start.go:838] validating driver "docker" against &{Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:32:48.419310  282279 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:32:48.420906  282279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:32:48.521697  282279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:32:48.442146841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:32:48.522015  282279 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:32:48.522046  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:32:48.522065  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:32:48.522085  282279 start_flags.go:317] config:
	{Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:32:48.525023  282279 out.go:177] * Starting control plane node default-k8s-diff-port-211952 in cluster default-k8s-diff-port-211952
	I0108 21:32:48.526212  282279 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:32:48.527567  282279 out.go:177] * Pulling base image ...
	I0108 21:32:48.528812  282279 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:32:48.528852  282279 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0108 21:32:48.528864  282279 cache.go:57] Caching tarball of preloaded images
	I0108 21:32:48.528902  282279 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:32:48.529139  282279 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:32:48.529153  282279 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0108 21:32:48.529259  282279 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/config.json ...
	I0108 21:32:48.553994  282279 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:32:48.554019  282279 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:32:48.554037  282279 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:32:48.554075  282279 start.go:364] acquiring machines lock for default-k8s-diff-port-211952: {Name:mk8d09fc97f48331eb5f466fa120df2ec3fb1468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:32:48.554172  282279 start.go:368] acquired machines lock for "default-k8s-diff-port-211952" in 76.094µs
	I0108 21:32:48.554190  282279 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:32:48.554194  282279 fix.go:55] fixHost starting: 
	I0108 21:32:48.554387  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:32:48.579038  282279 fix.go:103] recreateIfNeeded on default-k8s-diff-port-211952: state=Stopped err=<nil>
	W0108 21:32:48.579064  282279 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:32:48.581203  282279 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-211952" ...
	I0108 21:32:45.206742  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:47.706026  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:47.985367  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:50.484419  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:48.582569  282279 cli_runner.go:164] Run: docker start default-k8s-diff-port-211952
	I0108 21:32:48.934338  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:32:48.961177  282279 kic.go:415] container "default-k8s-diff-port-211952" state is running.
	I0108 21:32:48.961578  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:48.987154  282279 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/config.json ...
	I0108 21:32:48.987361  282279 machine.go:88] provisioning docker machine ...
	I0108 21:32:48.987381  282279 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-211952"
	I0108 21:32:48.987415  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:49.012441  282279 main.go:134] libmachine: Using SSH client type: native
	I0108 21:32:49.012623  282279 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0108 21:32:49.012640  282279 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-211952 && echo "default-k8s-diff-port-211952" | sudo tee /etc/hostname
	I0108 21:32:49.013295  282279 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56504->127.0.0.1:33057: read: connection reset by peer
	I0108 21:32:52.144323  282279 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-211952
	
	I0108 21:32:52.144405  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.170929  282279 main.go:134] libmachine: Using SSH client type: native
	I0108 21:32:52.171092  282279 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0108 21:32:52.171123  282279 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-211952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-211952/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-211952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:32:52.287354  282279 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:32:52.287380  282279 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:32:52.287397  282279 ubuntu.go:177] setting up certificates
	I0108 21:32:52.287404  282279 provision.go:83] configureAuth start
	I0108 21:32:52.287448  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:52.314640  282279 provision.go:138] copyHostCerts
	I0108 21:32:52.314692  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:32:52.314701  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:32:52.314776  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:32:52.314872  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:32:52.314881  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:32:52.314915  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:32:52.314981  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:32:52.314990  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:32:52.315028  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:32:52.315090  282279 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-211952 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-211952]
	I0108 21:32:52.393623  282279 provision.go:172] copyRemoteCerts
	I0108 21:32:52.393682  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:32:52.393732  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.420616  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.506700  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:32:52.523990  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0108 21:32:52.541202  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:32:52.558612  282279 provision.go:86] duration metric: configureAuth took 271.196425ms
	I0108 21:32:52.558637  282279 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:32:52.558842  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:32:52.558859  282279 machine.go:91] provisioned docker machine in 3.571482619s
	I0108 21:32:52.558868  282279 start.go:300] post-start starting for "default-k8s-diff-port-211952" (driver="docker")
	I0108 21:32:52.558880  282279 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:32:52.558932  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:32:52.558975  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.584657  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.674855  282279 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:32:52.677553  282279 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:32:52.677581  282279 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:32:52.677595  282279 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:32:52.677605  282279 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:32:52.677620  282279 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:32:52.677677  282279 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:32:52.677760  282279 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:32:52.677874  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:32:52.684482  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:32:52.701176  282279 start.go:303] post-start completed in 142.293081ms
	I0108 21:32:52.701237  282279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:32:52.701267  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.726596  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.807879  282279 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:32:52.811789  282279 fix.go:57] fixHost completed within 4.257589708s
	I0108 21:32:52.811814  282279 start.go:83] releasing machines lock for "default-k8s-diff-port-211952", held for 4.257630168s
	I0108 21:32:52.811884  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:52.836240  282279 ssh_runner.go:195] Run: cat /version.json
	I0108 21:32:52.836282  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.836337  282279 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:32:52.836380  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.860700  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.862030  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.970766  282279 ssh_runner.go:195] Run: systemctl --version
	I0108 21:32:52.974774  282279 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:32:52.987146  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:32:52.996877  282279 docker.go:189] disabling docker service ...
	I0108 21:32:52.996922  282279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:32:53.006589  282279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:32:53.015555  282279 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:32:53.091863  282279 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:32:53.169568  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:32:53.178903  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:32:53.192470  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:32:53.200832  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:32:53.209487  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:32:53.217000  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 21:32:53.224820  282279 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:32:53.231063  282279 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:32:53.237511  282279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:32:50.205796  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:52.206925  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:54.705913  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:52.485249  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:54.984287  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:56.984440  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:53.318100  282279 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:32:53.382213  282279 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:32:53.382279  282279 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:32:53.386027  282279 start.go:472] Will wait 60s for crictl version
	I0108 21:32:53.386088  282279 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:32:53.410740  282279 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:32:53Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
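The "server is not initialized yet" fatal above is what triggers the ~11s retry. A hedged way to re-check the runtime from outside the harness, mirroring the ssh/crictl pattern already present in the Audit table (sketch only, not part of the recorded run):

	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-211952 -- sudo crictl version
	# if crictl still reports the runtime as uninitialized, inspecting the containerd
	# unit that the log restarts just above is the next step (command is an assumption)
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-211952 -- sudo systemctl status containerd --no-pager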
	I0108 21:32:56.706559  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:59.206591  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:59.485251  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:01.985238  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:04.458457  282279 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:33:04.481958  282279 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:33:04.482015  282279 ssh_runner.go:195] Run: containerd --version
	I0108 21:33:04.505934  282279 ssh_runner.go:195] Run: containerd --version
	I0108 21:33:04.531417  282279 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:33:01.206633  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:03.705866  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:04.484384  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:06.484587  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:04.533192  282279 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-211952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:33:04.556070  282279 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0108 21:33:04.559379  282279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
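The one-liner above keeps the host.minikube.internal mapping idempotent: it filters out any existing line ending in that name, appends a fresh "IP<tab>name" entry, and copies the temp file back over /etc/hosts. A rough Go equivalent of the same filter-and-append step (the function name and file mode are illustrative assumptions):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// rewriteHosts drops any existing line for the given host name and appends a
	// fresh "IP\tname" mapping, mirroring the grep -v / echo pipeline above.
	func rewriteHosts(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // stale entry, re-added below
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := rewriteHosts("/etc/hosts", "192.168.67.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}
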
	I0108 21:33:04.568499  282279 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:33:04.568548  282279 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:33:04.591581  282279 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:33:04.591606  282279 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:33:04.591658  282279 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:33:04.614523  282279 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:33:04.614545  282279 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:33:04.614587  282279 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:33:04.638172  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:33:04.638197  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:33:04.638209  282279 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:33:04.638221  282279 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-211952 NodeName:default-k8s-diff-port-211952 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:33:04.638396  282279 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-diff-port-211952"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:33:04.638498  282279 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-diff-port-211952 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
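The regenerated kubelet drop-in above pins the binary to /var/lib/minikube/binaries/v1.25.3/kubelet and points both the container-runtime and image-service endpoints at the containerd socket, with the node name and IP taken from the cluster config that follows it. A small sketch of how that flag line can be assembled from those per-node values (a simplified illustration, not minikube's actual template):

	package main

	import (
		"fmt"
		"strings"
	)

	// kubeletFlags rebuilds the ExecStart flag list shown in the log above from
	// a few node-specific values.
	func kubeletFlags(version, node, ip, sock string) string {
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--container-runtime=remote",
			"--container-runtime-endpoint=unix://" + sock,
			"--hostname-override=" + node,
			"--image-service-endpoint=unix://" + sock,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + ip,
			"--runtime-request-timeout=15m",
		}
		return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s", version, strings.Join(flags, " "))
	}

	func main() {
		fmt.Println(kubeletFlags("v1.25.3", "default-k8s-diff-port-211952", "192.168.67.2", "/run/containerd/containerd.sock"))
	}
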
	I0108 21:33:04.638546  282279 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:33:04.645671  282279 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:33:04.645725  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:33:04.652367  282279 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (521 bytes)
	I0108 21:33:04.664767  282279 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:33:04.676853  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes)
	I0108 21:33:04.689096  282279 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:33:04.691974  282279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:33:04.700883  282279 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952 for IP: 192.168.67.2
	I0108 21:33:04.700988  282279 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:33:04.701028  282279 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:33:04.701091  282279 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/client.key
	I0108 21:33:04.701143  282279 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key.c7fa3a9e
	I0108 21:33:04.701174  282279 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.key
	I0108 21:33:04.701257  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:33:04.701282  282279 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:33:04.701292  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:33:04.701314  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:33:04.701334  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:33:04.701353  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:33:04.701392  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:33:04.701980  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:33:04.719063  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:33:04.735492  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:33:04.752219  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:33:04.769562  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:33:04.785821  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:33:04.802771  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:33:04.820712  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:33:04.838855  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:33:04.855960  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:33:04.872964  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:33:04.890046  282279 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:33:04.902625  282279 ssh_runner.go:195] Run: openssl version
	I0108 21:33:04.907630  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:33:04.914856  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.917989  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.918039  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.922582  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:33:04.929304  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:33:04.936712  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.939656  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.939705  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.944460  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:33:04.951168  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:33:04.958399  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.961446  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.961485  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.966099  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
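Each `openssl x509 -hash -noout` / `ln -fs` pair above installs one extra CA into the system trust store: OpenSSL resolves CA certificates by an 8-hex-digit subject-hash filename, so the symlink in /etc/ssl/certs must be named "<hash>.0" (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A hedged Go sketch of those two steps, shelling out to openssl just as the log does (the function name and example path are assumptions):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// trustCert computes the OpenSSL subject hash for a PEM certificate and links
	// it into /etc/ssl/certs under "<hash>.0", the layout built above.
	func trustCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}
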
	I0108 21:33:04.973053  282279 kubeadm.go:396] StartCluster: {Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:33:04.973140  282279 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:33:04.973193  282279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:33:04.997395  282279 cri.go:87] found id: "852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	I0108 21:33:04.997418  282279 cri.go:87] found id: "7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc"
	I0108 21:33:04.997424  282279 cri.go:87] found id: "26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225"
	I0108 21:33:04.997430  282279 cri.go:87] found id: "581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d"
	I0108 21:33:04.997436  282279 cri.go:87] found id: "e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa"
	I0108 21:33:04.997442  282279 cri.go:87] found id: "b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d"
	I0108 21:33:04.997448  282279 cri.go:87] found id: ""
	I0108 21:33:04.997486  282279 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:33:05.008860  282279 cri.go:114] JSON = null
	W0108 21:33:05.008911  282279 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
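The warning at kubeadm.go:403 is a consistency check before unpausing: crictl ps reported six kube-system containers, but `runc list -f json` printed `null` for this runc root, so the paused-container list decodes to empty and the unpause step is skipped. A small sketch of that decode-and-compare, assuming runc's usual `id`/`status` JSON fields (struct and function names are illustrative):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// runcContainer keeps only the fields needed here; `runc list -f json` can
	// legitimately print "null" when the root has no containers, which
	// json.Unmarshal turns into an empty slice.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func pausedIDs(listJSON []byte) ([]string, error) {
		var cs []runcContainer
		if err := json.Unmarshal(listJSON, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		paused, err := pausedIDs([]byte("null")) // "null" is exactly what the log captured
		if err != nil {
			fmt.Println(err)
			return
		}
		crictlCount := 6 // containers crictl ps reported above
		if len(paused) == 0 && crictlCount > 0 {
			fmt.Printf("list returned %d containers, but ps returned %d\n", len(paused), crictlCount)
		}
	}
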
	I0108 21:33:05.008979  282279 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:33:05.015919  282279 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:33:05.015939  282279 kubeadm.go:627] restartCluster start
	I0108 21:33:05.015976  282279 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:33:05.022384  282279 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.023096  282279 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-211952" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:33:05.023497  282279 kubeconfig.go:146] "default-k8s-diff-port-211952" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:33:05.024165  282279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:33:05.025421  282279 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:33:05.032110  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.032154  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.039769  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.240114  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.240215  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.248661  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.439925  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.440040  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.448824  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.640029  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.640100  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.648577  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.839823  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.839949  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.848450  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.040650  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.040716  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.049118  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.240431  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.240537  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.249216  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.440559  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.440631  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.449237  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.640348  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.640440  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.648807  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.840116  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.840207  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.848729  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.039918  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.039988  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.048542  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.240718  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.240800  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.249405  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.440610  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.440687  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.449502  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.640620  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.640687  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.649358  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.840624  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.840691  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.849725  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.039967  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:08.040051  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:08.048653  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.048676  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:08.048717  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:08.056766  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.056803  282279 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0108 21:33:08.056811  282279 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:33:08.056824  282279 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:33:08.056880  282279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:33:08.081283  282279 cri.go:87] found id: "852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	I0108 21:33:08.081308  282279 cri.go:87] found id: "7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc"
	I0108 21:33:08.081315  282279 cri.go:87] found id: "26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225"
	I0108 21:33:08.081322  282279 cri.go:87] found id: "581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d"
	I0108 21:33:08.081330  282279 cri.go:87] found id: "e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa"
	I0108 21:33:08.081340  282279 cri.go:87] found id: "b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d"
	I0108 21:33:08.081349  282279 cri.go:87] found id: ""
	I0108 21:33:08.081357  282279 cri.go:232] Stopping containers: [852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f 7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc 26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225 581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d]
	I0108 21:33:08.081407  282279 ssh_runner.go:195] Run: which crictl
	I0108 21:33:08.084402  282279 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f 7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc 26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225 581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d
	I0108 21:33:08.110089  282279 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:33:08.120362  282279 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:33:08.127839  282279 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan  8 21:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  8 21:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Jan  8 21:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  8 21:20 /etc/kubernetes/scheduler.conf
	
	I0108 21:33:08.127889  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0108 21:33:08.134530  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0108 21:33:08.141215  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0108 21:33:08.147849  282279 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.147901  282279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 21:33:08.154323  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0108 21:33:08.161096  282279 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.161153  282279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 21:33:08.167783  282279 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:33:08.174752  282279 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:33:08.174774  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.220042  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:05.706546  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:07.706879  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:08.484783  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:10.985364  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:08.629802  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.761310  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.827730  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.933064  282279 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:33:08.933117  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:09.442969  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:09.942976  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:10.014802  282279 api_server.go:71] duration metric: took 1.081741817s to wait for apiserver process to appear ...
	I0108 21:33:10.014831  282279 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:33:10.014843  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:10.205696  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:12.206601  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:14.706422  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:13.540654  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:33:13.540692  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:33:14.041349  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:14.045672  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:33:14.045695  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
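The initial 403 is the unauthenticated probe being rejected ("system:anonymous"), most likely because the RBAC bootstrap roles that permit anonymous /healthz access have not been created yet; the 500s that follow are /healthz listing each post-start hook, with rbac/bootstrap-roles and the priority-class hook still failing, and the check flips to 200 a second later. A minimal Go sketch of the same kind of unauthenticated probe against this endpoint; the explicit ?verbose query and the skip-verify TLS are assumptions for the example, not minikube's api_server.go:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz issues an unauthenticated GET like the one logged above and
	// prints the status code plus the per-check body.
	func probeHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver cert is signed by minikubeCA; skip verification
				// for this illustrative probe only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s -> %d\n%s\n", url, resp.StatusCode, body)
		return nil
	}

	func main() {
		if err := probeHealthz("https://192.168.67.2:8444/healthz?verbose"); err != nil {
			fmt.Println(err)
		}
	}
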
	I0108 21:33:14.540838  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:14.545990  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:33:14.546035  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:33:15.041627  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:15.046572  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 200:
	ok
	I0108 21:33:15.052817  282279 api_server.go:140] control plane version: v1.25.3
	I0108 21:33:15.052839  282279 api_server.go:130] duration metric: took 5.038002036s to wait for apiserver health ...
	I0108 21:33:15.052848  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:33:15.052854  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:33:15.055132  282279 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:33:13.484537  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:15.484590  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:15.056590  282279 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:33:15.060305  282279 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:33:15.060320  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:33:15.073482  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:33:15.711930  282279 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:33:15.718666  282279 system_pods.go:59] 9 kube-system pods found
	I0108 21:33:15.718695  282279 system_pods.go:61] "coredns-565d847f94-fd94f" [08c29923-1e9a-4576-884b-e79485bdb24e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718706  282279 system_pods.go:61] "etcd-default-k8s-diff-port-211952" [4d6fe94c-75ef-40cf-b1c1-2377203f2503] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:33:15.718714  282279 system_pods.go:61] "kindnet-52cqk" [4ae6659c-e68a-492e-9e3f-5ffb047114c5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:33:15.718719  282279 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-211952" [e7f5a5bc-2f08-46ed-b8e1-1551fa29d27c] Running
	I0108 21:33:15.718728  282279 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-211952" [28c6bf68-0f27-494d-9102-fc669542c4a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:33:15.718735  282279 system_pods.go:61] "kube-proxy-hz8lw" [fa7c0714-1e45-4256-9383-976e79d1e49e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:33:15.718742  282279 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-211952" [645cd11b-9e55-47fe-aa43-f3b702c95c45] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:33:15.718751  282279 system_pods.go:61] "metrics-server-5c8fd5cf8-l2hp5" [bcd90320-490a-4343-abcb-f40aa375512e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718757  282279 system_pods.go:61] "storage-provisioner" [ad01ceaf-4269-4a54-b47e-b56d85e14354] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718765  282279 system_pods.go:74] duration metric: took 6.815857ms to wait for pod list to return data ...
	I0108 21:33:15.718772  282279 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:33:15.721658  282279 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:33:15.721678  282279 node_conditions.go:123] node cpu capacity is 8
	I0108 21:33:15.721690  282279 node_conditions.go:105] duration metric: took 2.910879ms to run NodePressure ...
	I0108 21:33:15.721709  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:15.850359  282279 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 21:33:15.854037  282279 kubeadm.go:778] kubelet initialised
	I0108 21:33:15.854056  282279 kubeadm.go:779] duration metric: took 3.67496ms waiting for restarted kubelet to initialise ...
	I0108 21:33:15.854063  282279 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:33:15.859567  282279 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:17.864672  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:17.205815  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.206912  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:17.485768  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.985283  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.865551  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:22.365227  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:21.706078  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:23.706755  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:22.485377  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:24.984649  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:24.865051  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:27.364362  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:25.706795  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:28.206074  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:27.484652  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:29.484907  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:31.985181  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:29.365262  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:31.864536  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:30.206547  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:32.705805  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:34.484659  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:36.985157  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:33.865545  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:36.364706  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:35.205900  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:37.206575  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:39.706410  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:39.484405  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:41.485144  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:38.366314  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:40.865544  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:42.205820  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:44.206429  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:43.985033  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:45.985104  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:43.364368  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:45.365457  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:47.865583  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:46.706576  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:49.206474  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:47.985130  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:50.484792  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:50.365374  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:52.865225  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:51.206583  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:53.706500  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:52.984520  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:54.984810  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:55.364623  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:57.365130  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:56.205754  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:58.206523  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:57.484534  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:59.984319  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:01.985026  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:59.865408  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:02.364929  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:00.706734  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:03.206405  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:04.485051  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:06.984884  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:04.864561  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:07.366326  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:05.706010  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:07.706288  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:08.985455  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:11.485043  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:09.865391  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:12.364526  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:10.206460  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:12.705615  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:14.706005  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:13.984826  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:16.484152  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:14.364606  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:16.365289  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:17.206712  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:19.705849  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:18.485130  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:20.485537  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:18.864582  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:20.865195  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:22.865407  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:21.706525  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:24.206204  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:22.984564  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:24.984654  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:25.364979  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:27.365790  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:26.206664  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:28.705923  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:27.485200  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:29.984779  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:31.984961  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:29.865042  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:31.865310  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:30.705966  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:32.706184  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:34.706518  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:33.985148  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.484872  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:33.865432  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.365146  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.706768  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:39.205866  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:38.485130  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:40.984717  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:38.865173  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:41.364499  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:41.705813  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.706112  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.484553  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:45.984290  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.365079  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:45.365570  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:47.865054  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:46.206566  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:48.706606  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:47.984724  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:50.484463  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:50.365544  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:52.864342  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:51.206067  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:53.206386  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:52.484509  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:54.484628  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:56.984663  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:54.865174  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:56.865226  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:55.705777  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:58.206536  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:58.985043  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:00.985441  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:59.365717  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:01.865247  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:00.705686  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:02.706281  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:03.484874  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:05.485178  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:03.865438  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:06.365588  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:05.206221  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:07.206742  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:09.706286  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:07.485379  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:09.485491  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:11.985421  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:08.865293  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:11.364853  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:12.205938  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:14.206587  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:14.484834  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:16.984217  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:13.864458  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:15.865297  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:16.706511  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:19.206844  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:18.985241  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:21.485361  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:18.364605  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:20.365307  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:22.865280  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:21.706576  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:24.206264  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:23.984764  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:25.984921  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:25.365211  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:27.865212  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:26.706631  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:29.205837  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:28.485111  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:30.984944  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:29.865294  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:32.365083  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:31.206819  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:33.706459  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:33.485037  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:35.984758  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:34.864627  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:36.865632  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:36.206617  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:38.705904  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:37.984809  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:39.984942  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:41.985321  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:39.365282  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:41.365393  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:40.706491  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:43.206589  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:44.484609  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:46.985153  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:43.865525  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:46.364697  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:45.705645  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:47.705922  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:49.706709  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:49.484711  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:51.485242  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:48.365304  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:50.865062  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:52.206076  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:54.206636  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:53.984904  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:55.985190  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:53.364585  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:55.866756  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:56.706242  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.706485  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.484404  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.485044  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.365278  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.864694  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:02.865305  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.706662  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:03.206301  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:02.485191  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:04.984589  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:05.365592  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.865076  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:05.705915  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.706822  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.484499  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:09.985336  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:10.364594  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.365393  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:10.206345  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.206780  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:14.705921  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.485725  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:14.982268  278286 pod_ready.go:81] duration metric: took 4m0.003125371s waiting for pod "coredns-565d847f94-jw8vf" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:14.982291  278286 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-jw8vf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:36:14.982340  278286 pod_ready.go:38] duration metric: took 4m0.007969001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:14.982370  278286 kubeadm.go:631] restartCluster took 4m10.8124082s
	W0108 21:36:14.982580  278286 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
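The repeated pod_ready.go:102 entries above all report the same condition: the single node still carries the node.kubernetes.io/not-ready taint, so the CoreDNS pod stays Pending/Unschedulable and the 4m0s readiness wait times out, which is why minikube gives up on restartCluster here and falls back to a full kubeadm reset. The check that times out is a plain readiness poll; a minimal client-go sketch of that kind of poll is shown below (illustrative only, not minikube's actual pod_ready.go code; the function name, 2s interval, and kubeconfig path are assumptions):

	// Hypothetical sketch of a pod readiness poll in the style of the
	// pod_ready.go checks logged above; not minikube's implementation.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls every 2s until the pod's Ready condition is True
	// or the timeout elapses (the 4m0s seen in the log above).
	func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient lookup errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil // still Pending/Unschedulable, e.g. node tainted not-ready
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPodReady(cs, "kube-system", "coredns-565d847f94-jw8vf", 4*time.Minute); err != nil {
			fmt.Println("coredns never became Ready:", err)
		}
	}
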
	I0108 21:36:14.982625  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:36:17.712121  278286 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.729470949s)
	I0108 21:36:17.712185  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:17.722197  278286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:17.729255  278286 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:36:17.729298  278286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:36:17.736461  278286 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:36:17.736503  278286 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:36:17.776074  278286 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:36:17.776141  278286 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:36:17.803264  278286 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:36:17.803362  278286 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:36:17.803405  278286 kubeadm.go:317] OS: Linux
	I0108 21:36:17.803445  278286 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:36:17.803517  278286 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:36:17.803559  278286 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:36:17.803599  278286 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:36:17.803644  278286 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:36:17.803713  278286 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:36:17.803782  278286 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:36:17.803823  278286 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:36:17.803861  278286 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:36:17.868509  278286 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:36:17.868640  278286 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:36:17.868786  278286 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:36:17.980682  278286 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:36:14.864781  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:16.865103  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:17.985661  278286 out.go:204]   - Generating certificates and keys ...
	I0108 21:36:17.985801  278286 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:36:17.985902  278286 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:36:17.986004  278286 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:36:17.986091  278286 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:36:17.986183  278286 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:36:17.986259  278286 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:36:17.986341  278286 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:36:17.986417  278286 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:36:17.986542  278286 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:36:17.986649  278286 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:36:17.986701  278286 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:36:17.986780  278286 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:36:18.059736  278286 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:36:18.157820  278286 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:36:18.409007  278286 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:36:18.508551  278286 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:36:18.520890  278286 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:36:18.521889  278286 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:36:18.521949  278286 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:36:18.609158  278286 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:36:16.706837  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:19.206362  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:18.611390  278286 out.go:204]   - Booting up control plane ...
	I0108 21:36:18.611574  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:36:18.612908  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:36:18.613799  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:36:18.614568  278286 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:36:18.616788  278286 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:36:18.865230  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:20.865904  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:21.705735  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:23.706244  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:24.619697  278286 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002882 seconds
	I0108 21:36:24.619903  278286 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:36:24.627998  278286 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:36:25.143041  278286 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:36:25.143241  278286 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-211859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:36:25.650094  278286 kubeadm.go:317] [bootstrap-token] Using token: 0hs0sx.2quwwfjv2ljr7rle
	I0108 21:36:25.651809  278286 out.go:204]   - Configuring RBAC rules ...
	I0108 21:36:25.651961  278286 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:36:25.654307  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:36:25.658950  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:36:25.660952  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:36:25.662921  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:36:25.664784  278286 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:36:25.671893  278286 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:36:25.864621  278286 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:36:26.057684  278286 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:36:26.058669  278286 kubeadm.go:317] 
	I0108 21:36:26.058754  278286 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:36:26.058765  278286 kubeadm.go:317] 
	I0108 21:36:26.058853  278286 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:36:26.058869  278286 kubeadm.go:317] 
	I0108 21:36:26.058904  278286 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:36:26.058983  278286 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:36:26.059054  278286 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:36:26.059063  278286 kubeadm.go:317] 
	I0108 21:36:26.059140  278286 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 21:36:26.059150  278286 kubeadm.go:317] 
	I0108 21:36:26.059219  278286 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:36:26.059229  278286 kubeadm.go:317] 
	I0108 21:36:26.059298  278286 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:36:26.059393  278286 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:36:26.059498  278286 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:36:26.059510  278286 kubeadm.go:317] 
	I0108 21:36:26.059614  278286 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:36:26.059726  278286 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:36:26.059744  278286 kubeadm.go:317] 
	I0108 21:36:26.059848  278286 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 0hs0sx.2quwwfjv2ljr7rle \
	I0108 21:36:26.059981  278286 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:36:26.060005  278286 kubeadm.go:317] 	--control-plane 
	I0108 21:36:26.060009  278286 kubeadm.go:317] 
	I0108 21:36:26.060140  278286 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:36:26.060156  278286 kubeadm.go:317] 
	I0108 21:36:26.060242  278286 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 0hs0sx.2quwwfjv2ljr7rle \
	I0108 21:36:26.060344  278286 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:36:26.061999  278286 kubeadm.go:317] W0108 21:36:17.771186    3316 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:36:26.062209  278286 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:36:26.062331  278286 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:36:26.062355  278286 cni.go:95] Creating CNI manager for ""
	I0108 21:36:26.062365  278286 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:36:26.064570  278286 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:36:26.066293  278286 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:36:26.112674  278286 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:36:26.112695  278286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:36:26.128247  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:36:26.801006  278286 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:36:26.801092  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:26.801100  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=no-preload-211859 minikube.k8s.io/updated_at=2023_01_08T21_36_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:26.808849  278286 ops.go:34] apiserver oom_adj: -16
	I0108 21:36:26.928188  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:23.365451  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:25.365511  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:27.864750  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:25.706512  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:28.206205  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:27.522837  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:28.022542  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:28.522922  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.022368  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.522328  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:30.022929  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:30.523064  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:31.022221  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:31.522993  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:32.022733  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.865401  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:31.865613  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:30.207607  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:32.705941  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:34.706614  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:32.522593  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:33.022409  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:33.522830  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.022514  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.522961  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:35.023204  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:35.523260  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:36.022528  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:36.522928  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:37.022841  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.364509  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:36.364566  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:37.523049  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.022536  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.522834  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.586979  278286 kubeadm.go:1067] duration metric: took 11.78594385s to wait for elevateKubeSystemPrivileges.
	I0108 21:36:38.587009  278286 kubeadm.go:398] StartCluster complete in 4m34.458658123s
	I0108 21:36:38.587037  278286 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:38.587148  278286 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:36:38.588149  278286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:39.105452  278286 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-211859" rescaled to 1
	I0108 21:36:39.105521  278286 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:36:39.107702  278286 out.go:177] * Verifying Kubernetes components...
	I0108 21:36:39.105557  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:36:39.105612  278286 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:36:39.105739  278286 config.go:180] Loaded profile config "no-preload-211859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:36:39.109968  278286 addons.go:65] Setting storage-provisioner=true in profile "no-preload-211859"
	I0108 21:36:39.109979  278286 addons.go:65] Setting default-storageclass=true in profile "no-preload-211859"
	I0108 21:36:39.109999  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:39.110001  278286 addons.go:227] Setting addon storage-provisioner=true in "no-preload-211859"
	I0108 21:36:39.110004  278286 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-211859"
	W0108 21:36:39.110010  278286 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:36:39.110055  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.109970  278286 addons.go:65] Setting dashboard=true in profile "no-preload-211859"
	I0108 21:36:39.110159  278286 addons.go:227] Setting addon dashboard=true in "no-preload-211859"
	W0108 21:36:39.110169  278286 addons.go:236] addon dashboard should already be in state true
	I0108 21:36:39.110200  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.109981  278286 addons.go:65] Setting metrics-server=true in profile "no-preload-211859"
	I0108 21:36:39.110261  278286 addons.go:227] Setting addon metrics-server=true in "no-preload-211859"
	W0108 21:36:39.110276  278286 addons.go:236] addon metrics-server should already be in state true
	I0108 21:36:39.110330  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.110352  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110511  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110572  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110706  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.151624  278286 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:36:39.153337  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:36:39.153355  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:36:39.153407  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.155756  278286 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:36:39.157349  278286 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:36:39.157371  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:36:39.157418  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.160291  278286 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:36:39.157827  278286 addons.go:227] Setting addon default-storageclass=true in "no-preload-211859"
	W0108 21:36:39.162099  278286 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:36:39.162135  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.162607  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.164649  278286 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:36:37.206095  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:39.206996  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:39.166241  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:36:39.166260  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:36:39.166314  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.193544  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.199785  278286 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:36:39.199812  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:36:39.199862  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.205498  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.208611  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.231311  278286 node_ready.go:35] waiting up to 6m0s for node "no-preload-211859" to be "Ready" ...
	I0108 21:36:39.231694  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:36:39.240040  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.426253  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:36:39.426846  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:36:39.426865  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:36:39.436437  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:36:39.438425  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:36:39.438452  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:36:39.523837  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:36:39.523905  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:36:39.532411  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:36:39.532499  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:36:39.615631  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:36:39.615719  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:36:39.626445  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:36:39.626521  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:36:39.639382  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:36:39.639451  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:36:39.725135  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:36:39.731545  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:36:39.731573  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:36:39.827181  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:36:39.827289  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:36:39.917954  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:36:39.917981  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:36:40.011154  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:36:40.011186  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:36:40.017536  278286 start.go:826] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
	I0108 21:36:40.033803  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:36:40.033827  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:36:40.117534  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:36:40.522822  278286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.096529518s)
	I0108 21:36:40.522881  278286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.086407927s)
	I0108 21:36:40.714945  278286 addons.go:457] Verifying addon metrics-server=true in "no-preload-211859"
	I0108 21:36:41.016673  278286 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-211859 addons enable metrics-server	
	
	
	I0108 21:36:41.018352  278286 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0108 21:36:41.019949  278286 addons.go:488] enableAddons completed in 1.914342148s
	I0108 21:36:41.239026  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:41.203867  274657 pod_ready.go:81] duration metric: took 4m0.002306196s waiting for pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:41.203901  274657 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:36:41.203940  274657 pod_ready.go:38] duration metric: took 4m0.006906053s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:41.203967  274657 kubeadm.go:631] restartCluster took 5m9.671476322s
	W0108 21:36:41.204176  274657 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:36:41.204211  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:36:42.410951  274657 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.206714622s)
	I0108 21:36:42.411034  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:42.420761  274657 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:42.427895  274657 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:36:42.427942  274657 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:36:42.434476  274657 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:36:42.434514  274657 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:36:42.479014  274657 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0108 21:36:42.479075  274657 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:36:42.506527  274657 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:36:42.506650  274657 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:36:42.506722  274657 kubeadm.go:317] OS: Linux
	I0108 21:36:42.506775  274657 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:36:42.506836  274657 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:36:42.506895  274657 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:36:42.506970  274657 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:36:42.507042  274657 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:36:42.507115  274657 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:36:42.575244  274657 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:36:42.575356  274657 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:36:42.575464  274657 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:36:42.705716  274657 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:36:42.707322  274657 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:36:42.714364  274657 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0108 21:36:42.788896  274657 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:36:38.365195  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:40.864900  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:42.865124  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:42.793301  274657 out.go:204]   - Generating certificates and keys ...
	I0108 21:36:42.793445  274657 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:36:42.793584  274657 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:36:42.793709  274657 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:36:42.793804  274657 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:36:42.793866  274657 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:36:42.793909  274657 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:36:42.793956  274657 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:36:42.794003  274657 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:36:42.794059  274657 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:36:42.794113  274657 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:36:42.794145  274657 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:36:42.794211  274657 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:36:42.938030  274657 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:36:43.019391  274657 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:36:43.165446  274657 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:36:43.296073  274657 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:36:43.296890  274657 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:36:43.298841  274657 out.go:204]   - Booting up control plane ...
	I0108 21:36:43.298961  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:36:43.303628  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:36:43.304561  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:36:43.305309  274657 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:36:43.307378  274657 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:36:43.239329  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:45.239687  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:45.365383  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:47.865553  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:47.739338  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:49.739648  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:52.238824  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:51.810038  274657 kubeadm.go:317] [apiclient] All control plane components are healthy after 8.502593 seconds
	I0108 21:36:51.810181  274657 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:36:51.821149  274657 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:36:52.336468  274657 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:36:52.336653  274657 kubeadm.go:317] [mark-control-plane] Marking the node old-k8s-version-211828 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0108 21:36:52.842409  274657 kubeadm.go:317] [bootstrap-token] Using token: ayw1nu.phe95ebgibs3udtw
	I0108 21:36:52.844083  274657 out.go:204]   - Configuring RBAC rules ...
	I0108 21:36:52.844190  274657 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:36:52.847569  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:36:52.850422  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:36:52.852561  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:36:52.854272  274657 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:36:52.894172  274657 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:36:53.257840  274657 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:36:53.258782  274657 kubeadm.go:317] 
	I0108 21:36:53.258856  274657 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:36:53.258871  274657 kubeadm.go:317] 
	I0108 21:36:53.258948  274657 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:36:53.258958  274657 kubeadm.go:317] 
	I0108 21:36:53.258988  274657 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:36:53.259068  274657 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:36:53.259119  274657 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:36:53.259126  274657 kubeadm.go:317] 
	I0108 21:36:53.259165  274657 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:36:53.259250  274657 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:36:53.259306  274657 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:36:53.259310  274657 kubeadm.go:317] 
	I0108 21:36:53.259383  274657 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities 
	I0108 21:36:53.259441  274657 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:36:53.259446  274657 kubeadm.go:317] 
	I0108 21:36:53.259539  274657 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token ayw1nu.phe95ebgibs3udtw \
	I0108 21:36:53.259662  274657 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:36:53.259688  274657 kubeadm.go:317]     --control-plane 	  
	I0108 21:36:53.259694  274657 kubeadm.go:317] 
	I0108 21:36:53.259813  274657 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:36:53.259829  274657 kubeadm.go:317] 
	I0108 21:36:53.259906  274657 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token ayw1nu.phe95ebgibs3udtw \
	I0108 21:36:53.260017  274657 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:36:53.262215  274657 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:36:53.262352  274657 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:36:53.262389  274657 cni.go:95] Creating CNI manager for ""
	I0108 21:36:53.262399  274657 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:36:53.264329  274657 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:36:50.364823  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:52.865232  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:53.265737  274657 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:36:53.269178  274657 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0108 21:36:53.269195  274657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:36:53.282457  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:36:53.488747  274657 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:36:53.488820  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:53.488836  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=old-k8s-version-211828 minikube.k8s.io/updated_at=2023_01_08T21_36_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:53.570539  274657 ops.go:34] apiserver oom_adj: -16
	I0108 21:36:53.570672  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:54.167787  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:54.667921  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:54.239313  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:56.739563  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:55.364998  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:57.365375  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:55.167437  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:55.667880  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:56.167390  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:56.667596  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:57.167755  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:57.667185  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:58.167862  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:58.667300  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:59.167329  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:59.667869  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:59.239207  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:01.738681  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:59.865037  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:02.364695  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:00.167819  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:00.668207  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:01.167287  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:01.668111  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:02.167785  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:02.667989  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:03.167539  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:03.667603  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:04.167676  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:04.667808  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:03.739097  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:05.739401  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:04.864908  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:07.365162  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:05.168182  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:05.667597  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:06.167537  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:06.667619  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:07.168108  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:07.668145  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:08.167448  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:08.262221  274657 kubeadm.go:1067] duration metric: took 14.773463011s to wait for elevateKubeSystemPrivileges.
	I0108 21:37:08.262258  274657 kubeadm.go:398] StartCluster complete in 5m36.772809994s
	I0108 21:37:08.262281  274657 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:08.262401  274657 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:37:08.263456  274657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:08.779968  274657 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-211828" rescaled to 1
	I0108 21:37:08.780035  274657 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:37:08.781734  274657 out.go:177] * Verifying Kubernetes components...
	I0108 21:37:08.780090  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:37:08.780101  274657 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:37:08.780321  274657 config.go:180] Loaded profile config "old-k8s-version-211828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0108 21:37:08.783353  274657 addons.go:65] Setting dashboard=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783365  274657 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783367  274657 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783380  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:08.783385  274657 addons.go:227] Setting addon metrics-server=true in "old-k8s-version-211828"
	I0108 21:37:08.783387  274657 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-211828"
	W0108 21:37:08.783394  274657 addons.go:236] addon metrics-server should already be in state true
	I0108 21:37:08.783441  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783384  274657 addons.go:227] Setting addon dashboard=true in "old-k8s-version-211828"
	W0108 21:37:08.783526  274657 addons.go:236] addon dashboard should already be in state true
	I0108 21:37:08.783568  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783356  274657 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783648  274657 addons.go:227] Setting addon storage-provisioner=true in "old-k8s-version-211828"
	W0108 21:37:08.783668  274657 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:37:08.783727  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783776  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.783927  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.784028  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.784133  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.794999  274657 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-211828" to be "Ready" ...
	I0108 21:37:08.824991  274657 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:37:08.822967  274657 addons.go:227] Setting addon default-storageclass=true in "old-k8s-version-211828"
	W0108 21:37:08.825030  274657 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:37:08.825068  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.826962  274657 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:37:08.825542  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.828596  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:37:08.828602  274657 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:37:08.828610  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:37:08.828632  274657 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:08.830193  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:37:08.831697  274657 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:37:08.830251  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.828662  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.833415  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:37:08.833435  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:37:08.833477  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.865130  274657 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:08.865153  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:37:08.865262  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.870167  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.876829  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.891352  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.895346  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:37:08.901551  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.966952  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:37:08.966980  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:37:09.020839  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:37:09.020864  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:37:09.026679  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:37:09.026702  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:37:09.035881  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:09.036053  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:09.037460  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:37:09.037484  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:37:09.113665  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:09.113699  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:37:09.126531  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:37:09.126566  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:37:09.132355  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:09.142671  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:37:09.142695  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:37:09.225954  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:37:09.225983  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:37:09.311794  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:37:09.311868  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:37:09.321460  274657 start.go:826] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0108 21:37:09.329750  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:37:09.329779  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:37:09.415014  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:37:09.415041  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:37:09.434577  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:09.434608  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:37:09.450703  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:09.848961  274657 addons.go:457] Verifying addon metrics-server=true in "old-k8s-version-211828"
	I0108 21:37:10.258944  274657 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-211828 addons enable metrics-server	
	
	
	I0108 21:37:10.260902  274657 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0108 21:37:07.739683  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:09.740319  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:12.239302  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:09.365405  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:11.865521  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:10.262484  274657 addons.go:488] enableAddons completed in 1.482385235s
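The addon flow that just finished for old-k8s-version-211828 works by scp-ing each manifest into /etc/kubernetes/addons/ on the node and applying them all with a single kubectl invocation against the in-cluster kubeconfig. A minimal sketch of inspecting the result from the host; the kubernetes-dashboard namespace is the upstream default and an assumption here, and the kubectl context name is assumed to match the minikube profile name:

	# list which addons minikube reports as enabled for this profile
	minikube -p old-k8s-version-211828 addons list
	# dashboard pods land in the kubernetes-dashboard namespace (assumed default)
	kubectl --context old-k8s-version-211828 -n kubernetes-dashboard get pods
	# print the dashboard URL once its pods are Ready
	minikube -p old-k8s-version-211828 dashboard --url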
	I0108 21:37:10.800978  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:13.301617  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:14.239339  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:16.239538  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:14.364973  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:15.862343  282279 pod_ready.go:81] duration metric: took 4m0.002735215s waiting for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" ...
	E0108 21:37:15.862365  282279 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:37:15.862410  282279 pod_ready.go:38] duration metric: took 4m0.008337756s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:37:15.862442  282279 kubeadm.go:631] restartCluster took 4m10.846498869s
	W0108 21:37:15.862572  282279 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:37:15.862600  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:37:18.604264  282279 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.741643542s)
	I0108 21:37:18.604323  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:18.613785  282279 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:37:18.620707  282279 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:37:18.620756  282279 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:37:18.627110  282279 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:37:18.627161  282279 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:37:18.665230  282279 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:37:18.665379  282279 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:37:18.693390  282279 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:37:18.693485  282279 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:37:18.693536  282279 kubeadm.go:317] OS: Linux
	I0108 21:37:18.693625  282279 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:37:18.693699  282279 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:37:18.693758  282279 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:37:18.693816  282279 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:37:18.693855  282279 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:37:18.693897  282279 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:37:18.693932  282279 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:37:18.693986  282279 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:37:18.694033  282279 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:37:18.757764  282279 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:37:18.757887  282279 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:37:18.757990  282279 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:37:18.880203  282279 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:37:18.885649  282279 out.go:204]   - Generating certificates and keys ...
	I0108 21:37:18.885786  282279 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:37:18.885859  282279 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:37:18.885942  282279 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:37:18.886014  282279 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:37:18.886108  282279 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:37:18.886194  282279 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:37:18.886282  282279 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:37:18.886366  282279 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:37:18.886464  282279 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:37:18.886537  282279 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:37:18.886603  282279 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:37:18.886705  282279 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:37:18.970116  282279 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:37:19.061650  282279 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:37:19.314844  282279 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:37:19.411377  282279 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:37:19.423013  282279 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:37:19.423842  282279 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:37:19.423907  282279 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:37:19.507274  282279 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:37:15.801234  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:18.301292  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:18.738947  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:20.739953  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:19.509473  282279 out.go:204]   - Booting up control plane ...
	I0108 21:37:19.509609  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:37:19.510392  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:37:19.511285  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:37:19.512005  282279 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:37:19.514544  282279 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:37:20.301380  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:22.801865  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:25.517443  282279 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002884 seconds
	I0108 21:37:25.517596  282279 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:37:25.525842  282279 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:37:26.040802  282279 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:37:26.041035  282279 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-diff-port-211952 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:37:26.548645  282279 kubeadm.go:317] [bootstrap-token] Using token: e8jg3u.r5d9gog7fpwiofqp
	I0108 21:37:26.550383  282279 out.go:204]   - Configuring RBAC rules ...
	I0108 21:37:26.550517  282279 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:37:26.553632  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:37:26.561595  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:37:26.563603  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:37:26.566273  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:37:26.569011  282279 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:37:26.577117  282279 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:37:26.777486  282279 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:37:26.956684  282279 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:37:26.957742  282279 kubeadm.go:317] 
	I0108 21:37:26.957841  282279 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:37:26.957852  282279 kubeadm.go:317] 
	I0108 21:37:26.957946  282279 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:37:26.957959  282279 kubeadm.go:317] 
	I0108 21:37:26.957992  282279 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:37:26.958072  282279 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:37:26.958151  282279 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:37:26.958161  282279 kubeadm.go:317] 
	I0108 21:37:26.958244  282279 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 21:37:26.958255  282279 kubeadm.go:317] 
	I0108 21:37:26.958324  282279 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:37:26.958334  282279 kubeadm.go:317] 
	I0108 21:37:26.958411  282279 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:37:26.958519  282279 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:37:26.958614  282279 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:37:26.958627  282279 kubeadm.go:317] 
	I0108 21:37:26.958736  282279 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:37:26.958873  282279 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:37:26.958895  282279 kubeadm.go:317] 
	I0108 21:37:26.958993  282279 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token e8jg3u.r5d9gog7fpwiofqp \
	I0108 21:37:26.959108  282279 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:37:26.959144  282279 kubeadm.go:317] 	--control-plane 
	I0108 21:37:26.959155  282279 kubeadm.go:317] 
	I0108 21:37:26.959279  282279 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:37:26.959295  282279 kubeadm.go:317] 
	I0108 21:37:26.959387  282279 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token e8jg3u.r5d9gog7fpwiofqp \
	I0108 21:37:26.959591  282279 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:37:27.010668  282279 kubeadm.go:317] W0108 21:37:18.659761    3310 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:37:27.010963  282279 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:37:27.011109  282279 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
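The bootstrap token printed in the join commands above (e8jg3u.…) is short-lived; kubeadm's default token TTL is 24 hours. If a node needed to join after expiry, a fresh token and a complete join line can be regenerated on the control plane. A minimal sketch, assuming it is run against the same kubeadm binary the log uses inside the default-k8s-diff-port-211952 node:

	# print a ready-to-use "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..." line
	minikube -p default-k8s-diff-port-211952 ssh -- sudo /var/lib/minikube/binaries/v1.25.3/kubeadm token create --print-join-command
	# list existing bootstrap tokens and their expiry times
	minikube -p default-k8s-diff-port-211952 ssh -- sudo /var/lib/minikube/binaries/v1.25.3/kubeadm token list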
	I0108 21:37:27.011143  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:37:27.011161  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:37:27.013790  282279 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:37:23.239090  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:25.239428  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:27.016436  282279 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:37:27.020247  282279 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:37:27.020267  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:37:27.033939  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
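Because the docker driver plus containerd runtime was detected (cni.go above), minikube applies a kindnet manifest as the CNI. A quick way to confirm it landed could be to check the rendered CNI config and the kindnet pods; a sketch, where the app=kindnet label is an assumption about the manifest rather than something shown in this log:

	# CNI config written on the node
	minikube -p default-k8s-diff-port-211952 ssh -- ls /etc/cni/net.d/
	# kindnet pods (label assumed; drop the selector to list everything in kube-system)
	kubectl --context default-k8s-diff-port-211952 -n kube-system get pods -l app=kindnet -o wide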
	I0108 21:37:27.773746  282279 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:37:27.773820  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:27.773829  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=default-k8s-diff-port-211952 minikube.k8s.io/updated_at=2023_01_08T21_37_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:27.858069  282279 ops.go:34] apiserver oom_adj: -16
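The three commands launched at 21:37:27.77 above are post-init housekeeping: read the apiserver's oom_adj, create the minikube-rbac ClusterRoleBinding, and stamp the node with minikube.k8s.io/* labels. A minimal sketch of verifying the last two from the host, assuming the kubectl context name matches the profile name:

	# cluster-admin binding created for kube-system's default service account
	kubectl --context default-k8s-diff-port-211952 get clusterrolebinding minikube-rbac -o wide
	# bookkeeping labels applied with --all --overwrite
	kubectl --context default-k8s-diff-port-211952 get node default-k8s-diff-port-211952 \
	  -L minikube.k8s.io/version -L minikube.k8s.io/name -L minikube.k8s.io/primary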
	I0108 21:37:27.858162  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:25.301674  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:27.801420  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:27.738878  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:29.739083  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:31.739252  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:28.451616  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:28.951553  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:29.451725  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:29.950766  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.450878  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.951743  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:31.450739  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:31.951303  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:32.450882  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:32.951389  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.301599  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:32.800759  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:33.739342  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:36.238973  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:33.451553  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:33.951640  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:34.451179  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:34.951522  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.450753  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.950904  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:36.450992  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:36.951610  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:37.451311  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:37.951081  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.301523  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:37.800886  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:38.451124  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:38.951311  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:39.451052  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:39.951786  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:40.450906  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:40.622559  282279 kubeadm.go:1067] duration metric: took 12.848793735s to wait for elevateKubeSystemPrivileges.
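The repeated "kubectl get sa default" runs from 21:37:27.86 to 21:37:40.45 are a readiness poll: elevateKubeSystemPrivileges retries roughly every half second until the default ServiceAccount exists, because the RBAC binding above is useless before the service account controller has created it. The same wait could be written as a small shell loop; a sketch using the paths from the log:

	KUBECTL=/var/lib/minikube/binaries/v1.25.3/kubectl
	# retry until the default ServiceAccount shows up (caller supplies its own overall timeout)
	until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done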
	I0108 21:37:40.622595  282279 kubeadm.go:398] StartCluster complete in 4m35.649555324s
	I0108 21:37:40.622614  282279 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:40.622704  282279 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:37:40.623799  282279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:41.138673  282279 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-diff-port-211952" rescaled to 1
	I0108 21:37:41.138736  282279 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:37:41.138753  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:37:41.141673  282279 out.go:177] * Verifying Kubernetes components...
	I0108 21:37:41.138793  282279 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:37:41.138974  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:37:41.143598  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:41.143622  282279 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143643  282279 addons.go:227] Setting addon storage-provisioner=true in "default-k8s-diff-port-211952"
	W0108 21:37:41.143652  282279 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:37:41.143672  282279 addons.go:65] Setting default-storageclass=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143694  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.143696  282279 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-211952"
	I0108 21:37:41.143742  282279 addons.go:65] Setting metrics-server=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143751  282279 addons.go:65] Setting dashboard=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143771  282279 addons.go:227] Setting addon metrics-server=true in "default-k8s-diff-port-211952"
	I0108 21:37:41.143780  282279 addons.go:227] Setting addon dashboard=true in "default-k8s-diff-port-211952"
	W0108 21:37:41.143797  282279 addons.go:236] addon dashboard should already be in state true
	I0108 21:37:41.143841  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	W0108 21:37:41.143781  282279 addons.go:236] addon metrics-server should already be in state true
	I0108 21:37:41.143915  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.144018  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144222  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144229  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144299  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.184041  282279 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:37:41.186236  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:37:41.186259  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:37:41.183770  282279 addons.go:227] Setting addon default-storageclass=true in "default-k8s-diff-port-211952"
	I0108 21:37:41.186311  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	W0108 21:37:41.186320  282279 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:37:41.186356  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.187948  282279 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:37:41.186812  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.191003  282279 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:37:41.189639  282279 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:41.192705  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:37:41.192773  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.195052  282279 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:37:38.239104  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:40.239437  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:41.196683  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:37:41.196706  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:37:41.196763  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.221516  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.226288  282279 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:41.226312  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:37:41.226392  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.226595  282279 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-211952" to be "Ready" ...
	I0108 21:37:41.226958  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:37:41.233899  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.236188  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.261350  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.328029  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:37:41.328055  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:37:41.410390  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:37:41.410477  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:37:41.429903  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:41.429978  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:37:41.431528  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:41.434596  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:41.435835  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:37:41.435891  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:37:41.518039  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:41.525611  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:37:41.525635  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:37:41.617739  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:37:41.617770  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:37:41.710400  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:37:41.710430  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:37:41.733619  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:37:41.733650  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:37:41.913693  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:37:41.913722  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:37:41.923702  282279 start.go:826] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
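The "host record injected into CoreDNS" line is the outcome of the sed pipeline at 21:37:41.226958: minikube rewrites the coredns ConfigMap so a hosts block resolves host.minikube.internal to the network gateway (192.168.67.1 here) before the usual "forward . /etc/resolv.conf" rule. A sketch for inspecting and exercising the record; the busybox image tag is an assumption, any image with nslookup works:

	# show the rewritten Corefile; it should now contain:
	#   hosts {
	#      192.168.67.1 host.minikube.internal
	#      fallthrough
	#   }
	kubectl --context default-k8s-diff-port-211952 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# resolve the record from inside the cluster
	kubectl --context default-k8s-diff-port-211952 run dns-check --rm -it --restart=Never \
	  --image=busybox:1.36 -- nslookup host.minikube.internal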
	I0108 21:37:41.939574  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:37:41.939602  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:37:42.033056  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:37:42.033090  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:37:42.126252  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:42.126280  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:37:42.219356  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:42.612393  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.177754873s)
	I0108 21:37:42.649146  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.131058374s)
	I0108 21:37:42.649245  282279 addons.go:457] Verifying addon metrics-server=true in "default-k8s-diff-port-211952"
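"Verifying addon metrics-server=true" only confirms the manifests applied; whether the metrics pipeline actually serves is a separate question (and in this run the deployment points at the placeholder image fake.domain/k8s.gcr.io/echoserver:1.4, so it is not expected to). A sketch of the usual checks, hedged as generic metrics-server verification rather than anything this test performs; the APIService and Deployment names are the conventional ones and are assumptions here:

	# the aggregated API registered by metrics-apiservice.yaml
	kubectl --context default-k8s-diff-port-211952 get apiservice v1beta1.metrics.k8s.io
	# deployment status and, once it is serving, live node metrics
	kubectl --context default-k8s-diff-port-211952 -n kube-system get deploy metrics-server
	kubectl --context default-k8s-diff-port-211952 top nodes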
	I0108 21:37:43.233589  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:43.519132  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.299673532s)
	I0108 21:37:43.521195  282279 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-211952 addons enable metrics-server	
	
	
	I0108 21:37:43.523337  282279 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0108 21:37:39.801595  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:41.801850  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:44.301445  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:42.739717  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:45.239105  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:43.525339  282279 addons.go:488] enableAddons completed in 2.386543882s
	I0108 21:37:45.732797  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:47.733580  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:46.800798  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:48.800989  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:47.738847  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:49.739115  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:52.238899  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:50.232935  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:52.233798  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:50.801073  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:52.801144  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:54.239128  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:56.739014  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:54.733016  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:56.733874  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:55.301797  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:57.801274  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:59.239171  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:01.239292  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:59.233003  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:01.233346  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:03.233665  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:59.801607  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:02.300746  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:04.301290  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:03.738362  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:05.738653  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:05.233897  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:07.234180  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:06.801829  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:09.301092  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:07.739372  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:10.239775  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:09.733403  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:11.733914  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:11.301300  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:13.800777  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:12.739231  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:15.238970  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:14.233667  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:16.732749  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:15.801406  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:17.801519  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:17.738673  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:19.738980  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:22.238583  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:18.733049  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:20.734111  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:23.233585  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:19.801620  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:22.301152  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:24.239366  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:26.738352  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:25.233967  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:27.732889  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:24.801117  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:27.300926  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:29.301266  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:28.739245  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:31.238599  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:29.733825  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:32.234140  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:31.301555  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:33.800917  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:33.239230  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:35.738754  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:34.733077  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:36.733560  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:35.801221  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:37.801365  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:38.239549  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:40.738973  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:38.733737  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:41.232994  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:43.233767  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:40.300687  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:42.301352  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:44.301680  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:42.739381  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:45.238776  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:47.238948  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:45.233859  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:47.733544  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:46.801357  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:48.801472  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:49.739156  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:52.239344  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:49.733766  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:52.233361  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:51.300633  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:53.301297  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:54.239534  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:56.738615  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:54.233916  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:56.733328  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:55.801671  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:58.301397  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:58.738759  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:00.739100  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:58.734209  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:01.232932  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:03.233020  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:00.801536  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:03.300754  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:03.239262  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:05.739203  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:05.233361  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:07.233770  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:05.301375  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:07.800934  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:08.239116  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:10.239161  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:09.733072  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:11.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:09.801368  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:12.301198  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:12.738523  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:14.739235  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:17.239112  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:14.233759  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:16.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:14.801261  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:17.300721  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:19.301075  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:19.738653  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:21.738764  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:18.733878  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:21.233705  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:21.301289  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:23.301516  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:23.738915  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:26.239205  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:23.733860  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:26.233091  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:28.233460  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:25.801475  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:28.301549  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:28.239272  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:30.738619  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:30.733105  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:32.734009  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:30.800660  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:33.301504  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:32.739223  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:35.238771  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:37.238972  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:35.233611  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:37.733328  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:35.801029  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:37.801500  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:39.239140  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:41.739302  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:39.733731  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:42.233801  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:40.301529  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:42.800621  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:44.238840  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:46.239243  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:44.733038  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:46.733391  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:44.801100  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:47.300450  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:49.301320  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:48.739022  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:51.238630  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:49.233954  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:51.733795  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:51.801285  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:53.801488  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:53.739288  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:56.239051  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:54.234004  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:56.733167  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:56.301044  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:58.800845  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:58.738520  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:00.739017  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:59.233766  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:01.733686  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:01.301450  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:03.301533  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:02.739209  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:04.739248  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:06.739344  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:04.233335  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:06.233688  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:08.233796  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:05.800709  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:07.801022  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:09.239054  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:11.739385  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:10.233869  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:12.733211  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:10.300739  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:12.301541  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:14.239654  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:16.739048  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:15.233047  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:17.733710  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:14.801253  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:16.801334  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:18.801736  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:19.238509  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:21.238761  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:20.232874  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:22.232916  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:21.301555  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:23.800846  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:23.239162  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:25.239455  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:27.240625  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:24.233476  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:26.733575  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:25.801246  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:28.301212  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:29.739116  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:31.739148  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:28.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:31.233731  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:33.233890  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:30.301480  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:32.800970  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:34.238950  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:36.239143  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:35.733135  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:37.733332  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:38.738709  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:39.241032  278286 node_ready.go:38] duration metric: took 4m0.009684254s waiting for node "no-preload-211859" to be "Ready" ...
	I0108 21:40:39.243691  278286 out.go:177] 
	W0108 21:40:39.245553  278286 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:40:39.245570  278286 out.go:239] * 
	W0108 21:40:39.246458  278286 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:40:39.249123  278286 out.go:177] 
	I0108 21:40:35.300833  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:37.801290  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	645f5298262ba       d6e3e26021b60       About a minute ago   Running             kindnet-cni               1                   f6461e142e259
	155ea2a3a27d7       d6e3e26021b60       4 minutes ago        Exited              kindnet-cni               0                   f6461e142e259
	36aefc2fd3ef3       beaaf00edd38a       4 minutes ago        Running             kube-proxy                0                   eec8859c8e251
	77cf9a5ca1193       6d23ec0e8b87e       4 minutes ago        Running             kube-scheduler            2                   f4377fb005063
	7f62da141fb9c       0346dbd74bcb9       4 minutes ago        Running             kube-apiserver            2                   788e0349fea64
	c2c7203594cf0       6039992312758       4 minutes ago        Running             kube-controller-manager   2                   d7254b1559d0f
	a93b9d4e3ea9d       a8a176a5d5d69       4 minutes ago        Running             etcd                      2                   cc1481044f8a0
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sun 2023-01-08 21:31:48 UTC, end at Sun 2023-01-08 21:40:40 UTC. --
	Jan 08 21:36:39 no-preload-211859 containerd[386]: time="2023-01-08T21:36:39.029798203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:36:39 no-preload-211859 containerd[386]: time="2023-01-08T21:36:39.029816999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:36:39 no-preload-211859 containerd[386]: time="2023-01-08T21:36:39.030069066Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eec8859c8e2510bd9c9d20acea4f79274b152c915dd514fd02ffa63a24aba944 pid=4261 runtime=io.containerd.runc.v2
	Jan 08 21:36:39 no-preload-211859 containerd[386]: time="2023-01-08T21:36:39.062689892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dw9j2,Uid:bb4b7f36-172e-4076-a210-b06c7861c761,Namespace:kube-system,Attempt:0,} returns sandbox id \"eec8859c8e2510bd9c9d20acea4f79274b152c915dd514fd02ffa63a24aba944\""
	Jan 08 21:36:39 no-preload-211859 containerd[386]: time="2023-01-08T21:36:39.065286770Z" level=info msg="CreateContainer within sandbox \"eec8859c8e2510bd9c9d20acea4f79274b152c915dd514fd02ffa63a24aba944\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Jan 08 21:36:39 no-preload-211859 containerd[386]: time="2023-01-08T21:36:39.080249726Z" level=info msg="CreateContainer within sandbox \"eec8859c8e2510bd9c9d20acea4f79274b152c915dd514fd02ffa63a24aba944\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"36aefc2fd3ef3cf6d2a078c5204b7e0485abb5fca01ffc7a772f74c95e24d1a7\""
	Jan 08 21:36:39 no-preload-211859 containerd[386]: time="2023-01-08T21:36:39.080846099Z" level=info msg="StartContainer for \"36aefc2fd3ef3cf6d2a078c5204b7e0485abb5fca01ffc7a772f74c95e24d1a7\""
	Jan 08 21:36:39 no-preload-211859 containerd[386]: time="2023-01-08T21:36:39.154306988Z" level=info msg="StartContainer for \"36aefc2fd3ef3cf6d2a078c5204b7e0485abb5fca01ffc7a772f74c95e24d1a7\" returns successfully"
	Jan 08 21:36:39 no-preload-211859 containerd[386]: time="2023-01-08T21:36:39.332463372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-4lwd7,Uid:3f1d12aa-f47d-4fcc-85fc-8c24cd90ed73,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6461e142e2593609dafe00c763c83709c1ba57bf8a5e5c434ee754ca31d6f4b\""
	Jan 08 21:36:39 no-preload-211859 containerd[386]: time="2023-01-08T21:36:39.337594187Z" level=info msg="CreateContainer within sandbox \"f6461e142e2593609dafe00c763c83709c1ba57bf8a5e5c434ee754ca31d6f4b\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Jan 08 21:36:39 no-preload-211859 containerd[386]: time="2023-01-08T21:36:39.409892962Z" level=info msg="CreateContainer within sandbox \"f6461e142e2593609dafe00c763c83709c1ba57bf8a5e5c434ee754ca31d6f4b\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"155ea2a3a27d753ec61e5df41b05eb3841a45a2c438abdac464daa7b633c401f\""
	Jan 08 21:36:39 no-preload-211859 containerd[386]: time="2023-01-08T21:36:39.411202931Z" level=info msg="StartContainer for \"155ea2a3a27d753ec61e5df41b05eb3841a45a2c438abdac464daa7b633c401f\""
	Jan 08 21:36:39 no-preload-211859 containerd[386]: time="2023-01-08T21:36:39.640441063Z" level=info msg="StartContainer for \"155ea2a3a27d753ec61e5df41b05eb3841a45a2c438abdac464daa7b633c401f\" returns successfully"
	Jan 08 21:37:25 no-preload-211859 containerd[386]: time="2023-01-08T21:37:25.920015076Z" level=error msg="ContainerStatus for \"1ab43e53fcadad949ed00f857a809f6af906752170ff468ca3901f5da843f414\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ab43e53fcadad949ed00f857a809f6af906752170ff468ca3901f5da843f414\": not found"
	Jan 08 21:37:25 no-preload-211859 containerd[386]: time="2023-01-08T21:37:25.920609696Z" level=error msg="ContainerStatus for \"4e84e494a55713049a1cc97191c0e3217baf5d01a50c2b7b648ddc479320d92d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e84e494a55713049a1cc97191c0e3217baf5d01a50c2b7b648ddc479320d92d\": not found"
	Jan 08 21:37:25 no-preload-211859 containerd[386]: time="2023-01-08T21:37:25.921099991Z" level=error msg="ContainerStatus for \"640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"640b6f75f7dac6e4a1b7067eb2fb2c579581463c7ebed4d6c1f077d9f36163c6\": not found"
	Jan 08 21:37:25 no-preload-211859 containerd[386]: time="2023-01-08T21:37:25.921604188Z" level=error msg="ContainerStatus for \"9f96b3767c5e00026898189c244b4f2201fc7a1fc8339fe02aeb94db1a2b4e0c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f96b3767c5e00026898189c244b4f2201fc7a1fc8339fe02aeb94db1a2b4e0c\": not found"
	Jan 08 21:39:20 no-preload-211859 containerd[386]: time="2023-01-08T21:39:20.255149554Z" level=info msg="shim disconnected" id=155ea2a3a27d753ec61e5df41b05eb3841a45a2c438abdac464daa7b633c401f
	Jan 08 21:39:20 no-preload-211859 containerd[386]: time="2023-01-08T21:39:20.255225303Z" level=warning msg="cleaning up after shim disconnected" id=155ea2a3a27d753ec61e5df41b05eb3841a45a2c438abdac464daa7b633c401f namespace=k8s.io
	Jan 08 21:39:20 no-preload-211859 containerd[386]: time="2023-01-08T21:39:20.255242700Z" level=info msg="cleaning up dead shim"
	Jan 08 21:39:20 no-preload-211859 containerd[386]: time="2023-01-08T21:39:20.263878444Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:39:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4765 runtime=io.containerd.runc.v2\n"
	Jan 08 21:39:20 no-preload-211859 containerd[386]: time="2023-01-08T21:39:20.399353298Z" level=info msg="CreateContainer within sandbox \"f6461e142e2593609dafe00c763c83709c1ba57bf8a5e5c434ee754ca31d6f4b\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Jan 08 21:39:20 no-preload-211859 containerd[386]: time="2023-01-08T21:39:20.413483670Z" level=info msg="CreateContainer within sandbox \"f6461e142e2593609dafe00c763c83709c1ba57bf8a5e5c434ee754ca31d6f4b\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"645f5298262ba4f6af75a84462916118de4527f05ce2100ceabe82b72e1e8d1d\""
	Jan 08 21:39:20 no-preload-211859 containerd[386]: time="2023-01-08T21:39:20.414033744Z" level=info msg="StartContainer for \"645f5298262ba4f6af75a84462916118de4527f05ce2100ceabe82b72e1e8d1d\""
	Jan 08 21:39:20 no-preload-211859 containerd[386]: time="2023-01-08T21:39:20.531579844Z" level=info msg="StartContainer for \"645f5298262ba4f6af75a84462916118de4527f05ce2100ceabe82b72e1e8d1d\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-211859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-211859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=no-preload-211859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T21_36_26_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 21:36:22 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-211859
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 08 Jan 2023 21:40:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 21:36:36 +0000   Sun, 08 Jan 2023 21:36:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 21:36:36 +0000   Sun, 08 Jan 2023 21:36:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 21:36:36 +0000   Sun, 08 Jan 2023 21:36:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 08 Jan 2023 21:36:36 +0000   Sun, 08 Jan 2023 21:36:20 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-211859
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                1811e86e-6254-4928-9c37-fe78bdd2d83e
	  Boot ID:                    abb1671c-ddf5-4694-bdc8-1024e5cc0b18
	  Kernel Version:             5.15.0-1025-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.10
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-211859                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m14s
	  kube-system                 kindnet-4lwd7                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-no-preload-211859             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-controller-manager-no-preload-211859    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-proxy-dw9j2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-no-preload-211859             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m1s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m21s (x5 over 4m21s)  kubelet          Node no-preload-211859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s (x5 over 4m21s)  kubelet          Node no-preload-211859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s (x4 over 4m21s)  kubelet          Node no-preload-211859 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s                  kubelet          Node no-preload-211859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s                  kubelet          Node no-preload-211859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s                  kubelet          Node no-preload-211859 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                   node-controller  Node no-preload-211859 event: Registered Node no-preload-211859 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +2.971851] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027844] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027909] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[Jan 8 21:19] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.006215] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023951] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.967852] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.035798] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023925] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.940341] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.027361] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.019905] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	
	* 
	* ==> etcd [a93b9d4e3ea9d7ddea017392f93e34c2efe6e6f80b028fff2eb8f2985504b8f1] <==
	* {"level":"info","ts":"2023-01-08T21:36:20.011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2023-01-08T21:36:20.011Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2023-01-08T21:36:20.012Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-08T21:36:20.012Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-08T21:36:20.012Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-08T21:36:20.012Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-01-08T21:36:20.013Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-01-08T21:36:20.043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2023-01-08T21:36:20.043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2023-01-08T21:36:20.043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2023-01-08T21:36:20.043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2023-01-08T21:36:20.043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2023-01-08T21:36:20.043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2023-01-08T21:36:20.043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2023-01-08T21:36:20.043Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:36:20.044Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:36:20.044Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:36:20.044Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:36:20.044Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-211859 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-08T21:36:20.044Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:36:20.044Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:36:20.045Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-08T21:36:20.045Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-08T21:36:20.046Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-08T21:36:20.046Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
	
	* 
	* ==> kernel <==
	*  21:40:40 up  1:23,  0 users,  load average: 0.17, 0.32, 0.78
	Linux no-preload-211859 5.15.0-1025-gcp #32~20.04.2-Ubuntu SMP Tue Nov 29 08:31:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [7f62da141fb9c80fca27bdb124a1de86aea7fc525eac1babc12734cc16fe88b3] <==
	* I0108 21:36:38.647787       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0108 21:36:40.649218       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.102.168.163]
	I0108 21:36:40.950307       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.97.149.141]
	I0108 21:36:40.960642       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.105.65.170]
	W0108 21:36:41.545136       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:36:41.545193       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:36:41.545201       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:36:41.545238       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:36:41.545327       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:36:41.546453       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:37:41.545353       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:37:41.545396       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:37:41.545402       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:37:41.547634       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:37:41.547681       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:37:41.547688       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:39:41.546241       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:39:41.546293       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:39:41.546304       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:39:41.548352       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:39:41.548424       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:39:41.548437       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [c2c7203594cf0ed2b2f7e9c27a5792035f061a09806bab1e72ef33029e1673f7] <==
	* E0108 21:36:40.829597       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-f87d45d87" failed with pods "kubernetes-dashboard-f87d45d87-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0108 21:36:40.831846       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5949f5c576" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5949f5c576-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0108 21:36:40.831927       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-5949f5c576" failed with pods "dashboard-metrics-scraper-5949f5c576-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0108 21:36:40.833145       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-f87d45d87" failed with pods "kubernetes-dashboard-f87d45d87-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0108 21:36:40.833197       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-f87d45d87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-f87d45d87-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0108 21:36:40.839886       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-f87d45d87" failed with pods "kubernetes-dashboard-f87d45d87-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0108 21:36:40.839929       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-f87d45d87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-f87d45d87-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0108 21:36:40.842933       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5949f5c576" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5949f5c576-6cctw"
	I0108 21:36:40.919680       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-f87d45d87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-f87d45d87-z6czc"
	E0108 21:37:08.360149       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:37:08.732063       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:37:38.366301       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:37:38.745542       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:38:08.372552       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:38:08.756002       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:38:38.379119       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:38:38.766289       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:39:08.385526       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:39:08.778082       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:39:38.392223       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:39:38.789079       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:40:08.398792       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:40:08.800525       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:40:38.405135       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:40:38.811988       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [36aefc2fd3ef3cf6d2a078c5204b7e0485abb5fca01ffc7a772f74c95e24d1a7] <==
	* I0108 21:36:39.228890       1 node.go:163] Successfully retrieved node IP: 192.168.85.2
	I0108 21:36:39.228966       1 server_others.go:138] "Detected node IP" address="192.168.85.2"
	I0108 21:36:39.228995       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0108 21:36:39.315602       1 server_others.go:206] "Using iptables Proxier"
	I0108 21:36:39.315644       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0108 21:36:39.315659       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0108 21:36:39.315684       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0108 21:36:39.315727       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:36:39.315913       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:36:39.316159       1 server.go:661] "Version info" version="v1.25.3"
	I0108 21:36:39.316177       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:36:39.318260       1 config.go:317] "Starting service config controller"
	I0108 21:36:39.318290       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0108 21:36:39.318319       1 config.go:226] "Starting endpoint slice config controller"
	I0108 21:36:39.318324       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0108 21:36:39.319273       1 config.go:444] "Starting node config controller"
	I0108 21:36:39.319285       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0108 21:36:39.421155       1 shared_informer.go:262] Caches are synced for node config
	I0108 21:36:39.421225       1 shared_informer.go:262] Caches are synced for service config
	I0108 21:36:39.421371       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [77cf9a5ca119376f6a2d79733c5dc309f991e166abd44a21369d5b7718807cdd] <==
	* E0108 21:36:22.925326       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 21:36:22.925310       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:36:22.925353       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:36:22.926289       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:36:22.925817       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:36:22.925982       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:36:22.926353       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:36:22.926060       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:36:22.926390       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:36:22.926102       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:36:22.926419       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:36:22.926193       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 21:36:22.926269       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:36:22.926448       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 21:36:22.926504       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 21:36:22.926529       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 21:36:23.930685       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 21:36:23.930729       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 21:36:24.002788       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:36:24.002824       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:36:24.003616       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:36:24.003647       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 21:36:24.036364       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:36:24.036402       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0108 21:36:25.821609       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:31:48 UTC, end at Sun 2023-01-08 21:40:40 UTC. --
	Jan 08 21:38:41 no-preload-211859 kubelet[3861]: E0108 21:38:41.190168    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:38:46 no-preload-211859 kubelet[3861]: E0108 21:38:46.191713    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:38:51 no-preload-211859 kubelet[3861]: E0108 21:38:51.192971    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:38:56 no-preload-211859 kubelet[3861]: E0108 21:38:56.193831    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:39:01 no-preload-211859 kubelet[3861]: E0108 21:39:01.194739    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:39:06 no-preload-211859 kubelet[3861]: E0108 21:39:06.195715    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:39:11 no-preload-211859 kubelet[3861]: E0108 21:39:11.196617    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:39:16 no-preload-211859 kubelet[3861]: E0108 21:39:16.197632    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:39:20 no-preload-211859 kubelet[3861]: I0108 21:39:20.396162    3861 scope.go:115] "RemoveContainer" containerID="155ea2a3a27d753ec61e5df41b05eb3841a45a2c438abdac464daa7b633c401f"
	Jan 08 21:39:21 no-preload-211859 kubelet[3861]: E0108 21:39:21.199024    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:39:26 no-preload-211859 kubelet[3861]: E0108 21:39:26.199988    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:39:31 no-preload-211859 kubelet[3861]: E0108 21:39:31.201748    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:39:36 no-preload-211859 kubelet[3861]: E0108 21:39:36.203176    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:39:41 no-preload-211859 kubelet[3861]: E0108 21:39:41.204695    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:39:46 no-preload-211859 kubelet[3861]: E0108 21:39:46.206385    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:39:51 no-preload-211859 kubelet[3861]: E0108 21:39:51.207349    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:39:56 no-preload-211859 kubelet[3861]: E0108 21:39:56.208887    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:01 no-preload-211859 kubelet[3861]: E0108 21:40:01.209938    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:06 no-preload-211859 kubelet[3861]: E0108 21:40:06.211074    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:11 no-preload-211859 kubelet[3861]: E0108 21:40:11.212394    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:16 no-preload-211859 kubelet[3861]: E0108 21:40:16.213579    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:21 no-preload-211859 kubelet[3861]: E0108 21:40:21.214661    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:26 no-preload-211859 kubelet[3861]: E0108 21:40:26.215564    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:31 no-preload-211859 kubelet[3861]: E0108 21:40:31.217187    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:36 no-preload-211859 kubelet[3861]: E0108 21:40:36.218263    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-211859 -n no-preload-211859
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-211859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-565d847f94-vph2s metrics-server-5c8fd5cf8-f6pc8 storage-provisioner dashboard-metrics-scraper-5949f5c576-6cctw kubernetes-dashboard-f87d45d87-z6czc
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-211859 describe pod coredns-565d847f94-vph2s metrics-server-5c8fd5cf8-f6pc8 storage-provisioner dashboard-metrics-scraper-5949f5c576-6cctw kubernetes-dashboard-f87d45d87-z6czc
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-211859 describe pod coredns-565d847f94-vph2s metrics-server-5c8fd5cf8-f6pc8 storage-provisioner dashboard-metrics-scraper-5949f5c576-6cctw kubernetes-dashboard-f87d45d87-z6czc: exit status 1 (69.773811ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-vph2s" not found
	Error from server (NotFound): pods "metrics-server-5c8fd5cf8-f6pc8" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-5949f5c576-6cctw" not found
	Error from server (NotFound): pods "kubernetes-dashboard-f87d45d87-z6czc" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-211859 describe pod coredns-565d847f94-vph2s metrics-server-5c8fd5cf8-f6pc8 storage-provisioner dashboard-metrics-scraper-5949f5c576-6cctw kubernetes-dashboard-f87d45d87-z6czc: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (534.07s)
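The failure above follows the usual "waited for node Ready, never got it" pattern: the node_ready.go lines in the log poll the Node object until its Ready condition flips to True, and in this run the kubelet kept reporting "cni plugin not initialized", so the condition stayed False until the 4m retry window (and the overall 6m GUEST_START wait) expired. The snippet below is a minimal sketch of that kind of wait loop using client-go; it is not minikube's actual node_ready.go, and the kubeconfig path and node name are copied from the log output purely for illustration.

// wait_node_ready.go - hypothetical helper sketching the node-Ready poll seen above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the node's Ready condition is True
// or the timeout expires, mirroring the repeated
// `node "..." has status "Ready":"False"` lines in the log.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API error: keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("node %q has status %q:%q\n", name, cond.Type, cond.Status)
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Kubeconfig path and node name are taken from the log output above, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/15565-3617/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "no-preload-211859", 6*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}

With the kubelet stuck on "cni plugin not initialized", a loop like this never returns true, which matches the "duration metric: took 4m0.009684254s waiting for node" line and the GUEST_START exit; the Ready condition can only flip once the kindnet/CNI configuration on the node is actually applied.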

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (535.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-211952 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
E0108 21:32:54.041730   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:32:57.125871   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 21:33:22.256010   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:33:36.691290   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
E0108 21:34:02.345693   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
E0108 21:35:15.379010   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 21:35:50.301528   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
E0108 21:35:56.111650   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
E0108 21:36:59.210648   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:37:39.301126   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
E0108 21:37:40.169269   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 21:37:54.041728   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:37:57.125498   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 21:38:36.690949   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
E0108 21:39:17.086208   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:39:58.424348   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 21:39:59.734464   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
E0108 21:40:15.378463   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-211952 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: exit status 80 (8m53.040292797s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-211952] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node default-k8s-diff-port-211952 in cluster default-k8s-diff-port-211952
	* Pulling base image ...
	* Restarting existing docker container for "default-k8s-diff-port-211952" ...
	* Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image k8s.gcr.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-211952 addons enable metrics-server	
	
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:32:48.271671  282279 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:32:48.271850  282279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:32:48.271858  282279 out.go:309] Setting ErrFile to fd 2...
	I0108 21:32:48.271863  282279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:32:48.271968  282279 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:32:48.272502  282279 out.go:303] Setting JSON to false
	I0108 21:32:48.273983  282279 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4518,"bootTime":1673209051,"procs":571,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:32:48.274047  282279 start.go:135] virtualization: kvm guest
	I0108 21:32:48.276504  282279 out.go:177] * [default-k8s-diff-port-211952] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:32:48.277957  282279 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:32:48.277885  282279 notify.go:220] Checking for updates...
	I0108 21:32:48.279445  282279 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:32:48.280736  282279 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:32:48.281949  282279 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:32:48.283257  282279 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:32:48.285163  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:32:48.285682  282279 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:32:48.316260  282279 docker.go:137] docker version: linux-20.10.22
	I0108 21:32:48.316350  282279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:32:48.413793  282279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:32:48.33729701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:32:48.413905  282279 docker.go:254] overlay module found
	I0108 21:32:48.417336  282279 out.go:177] * Using the docker driver based on existing profile
	I0108 21:32:48.418815  282279 start.go:294] selected driver: docker
	I0108 21:32:48.418829  282279 start.go:838] validating driver "docker" against &{Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:32:48.419310  282279 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:32:48.420906  282279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:32:48.521697  282279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:32:48.442146841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:32:48.522015  282279 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:32:48.522046  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:32:48.522065  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:32:48.522085  282279 start_flags.go:317] config:
	{Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:32:48.525023  282279 out.go:177] * Starting control plane node default-k8s-diff-port-211952 in cluster default-k8s-diff-port-211952
	I0108 21:32:48.526212  282279 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:32:48.527567  282279 out.go:177] * Pulling base image ...
	I0108 21:32:48.528812  282279 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:32:48.528852  282279 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0108 21:32:48.528864  282279 cache.go:57] Caching tarball of preloaded images
	I0108 21:32:48.528902  282279 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:32:48.529139  282279 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:32:48.529153  282279 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0108 21:32:48.529259  282279 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/config.json ...
	I0108 21:32:48.553994  282279 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:32:48.554019  282279 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:32:48.554037  282279 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:32:48.554075  282279 start.go:364] acquiring machines lock for default-k8s-diff-port-211952: {Name:mk8d09fc97f48331eb5f466fa120df2ec3fb1468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:32:48.554172  282279 start.go:368] acquired machines lock for "default-k8s-diff-port-211952" in 76.094µs
	I0108 21:32:48.554190  282279 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:32:48.554194  282279 fix.go:55] fixHost starting: 
	I0108 21:32:48.554387  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:32:48.579038  282279 fix.go:103] recreateIfNeeded on default-k8s-diff-port-211952: state=Stopped err=<nil>
	W0108 21:32:48.579064  282279 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:32:48.581203  282279 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-211952" ...
	I0108 21:32:48.582569  282279 cli_runner.go:164] Run: docker start default-k8s-diff-port-211952
	I0108 21:32:48.934338  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:32:48.961177  282279 kic.go:415] container "default-k8s-diff-port-211952" state is running.
	I0108 21:32:48.961578  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:48.987154  282279 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/config.json ...
	I0108 21:32:48.987361  282279 machine.go:88] provisioning docker machine ...
	I0108 21:32:48.987381  282279 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-211952"
	I0108 21:32:48.987415  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:49.012441  282279 main.go:134] libmachine: Using SSH client type: native
	I0108 21:32:49.012623  282279 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0108 21:32:49.012640  282279 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-211952 && echo "default-k8s-diff-port-211952" | sudo tee /etc/hostname
	I0108 21:32:49.013295  282279 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56504->127.0.0.1:33057: read: connection reset by peer
	I0108 21:32:52.144323  282279 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-211952
	
	I0108 21:32:52.144405  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.170929  282279 main.go:134] libmachine: Using SSH client type: native
	I0108 21:32:52.171092  282279 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0108 21:32:52.171123  282279 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-211952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-211952/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-211952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:32:52.287354  282279 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:32:52.287380  282279 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:32:52.287397  282279 ubuntu.go:177] setting up certificates
	I0108 21:32:52.287404  282279 provision.go:83] configureAuth start
	I0108 21:32:52.287448  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:52.314640  282279 provision.go:138] copyHostCerts
	I0108 21:32:52.314692  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:32:52.314701  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:32:52.314776  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:32:52.314872  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:32:52.314881  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:32:52.314915  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:32:52.314981  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:32:52.314990  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:32:52.315028  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:32:52.315090  282279 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-211952 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-211952]
	I0108 21:32:52.393623  282279 provision.go:172] copyRemoteCerts
	I0108 21:32:52.393682  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:32:52.393732  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.420616  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.506700  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:32:52.523990  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0108 21:32:52.541202  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:32:52.558612  282279 provision.go:86] duration metric: configureAuth took 271.196425ms
	I0108 21:32:52.558637  282279 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:32:52.558842  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:32:52.558859  282279 machine.go:91] provisioned docker machine in 3.571482619s
	I0108 21:32:52.558868  282279 start.go:300] post-start starting for "default-k8s-diff-port-211952" (driver="docker")
	I0108 21:32:52.558880  282279 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:32:52.558932  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:32:52.558975  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.584657  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.674855  282279 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:32:52.677553  282279 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:32:52.677581  282279 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:32:52.677595  282279 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:32:52.677605  282279 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:32:52.677620  282279 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:32:52.677677  282279 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:32:52.677760  282279 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:32:52.677874  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:32:52.684482  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:32:52.701176  282279 start.go:303] post-start completed in 142.293081ms
	I0108 21:32:52.701237  282279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:32:52.701267  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.726596  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.807879  282279 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:32:52.811789  282279 fix.go:57] fixHost completed within 4.257589708s
	I0108 21:32:52.811814  282279 start.go:83] releasing machines lock for "default-k8s-diff-port-211952", held for 4.257630168s
	I0108 21:32:52.811884  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:52.836240  282279 ssh_runner.go:195] Run: cat /version.json
	I0108 21:32:52.836282  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.836337  282279 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:32:52.836380  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.860700  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.862030  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.970766  282279 ssh_runner.go:195] Run: systemctl --version
	I0108 21:32:52.974774  282279 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:32:52.987146  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:32:52.996877  282279 docker.go:189] disabling docker service ...
	I0108 21:32:52.996922  282279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:32:53.006589  282279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:32:53.015555  282279 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:32:53.091863  282279 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:32:53.169568  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:32:53.178903  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:32:53.192470  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:32:53.200832  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:32:53.209487  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:32:53.217000  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 21:32:53.224820  282279 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:32:53.231063  282279 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:32:53.237511  282279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:32:53.318100  282279 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:32:53.382213  282279 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:32:53.382279  282279 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:32:53.386027  282279 start.go:472] Will wait 60s for crictl version
	I0108 21:32:53.386088  282279 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:32:53.410740  282279 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:32:53Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 21:33:04.458457  282279 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:33:04.481958  282279 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
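The failed `sudo crictl version` above is expected immediately after `systemctl restart containerd`: the CRI endpoint reports "server is not initialized yet" until its plugins finish loading, so minikube's retry.go backs off (about 11s here) and the second attempt succeeds against containerd 1.6.10. A rough sketch of such a wait loop, assuming only that crictl is on PATH (waitForCRI is illustrative, not minikube's actual code):

    // Illustrative wait loop: poll `sudo crictl version` until containerd's CRI
    // endpoint reports ready, roughly mirroring the retry seen in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForCRI(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	delay := 2 * time.Second
    	for {
    		out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
    		if err == nil {
    			fmt.Print(string(out))
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("CRI never became ready: %v\n%s", err, out)
    		}
    		time.Sleep(delay)
    		if delay < 15*time.Second {
    			delay *= 2 // simple exponential backoff; minikube uses a jittered retry
    		}
    	}
    }

    func main() {
    	if err := waitForCRI(60 * time.Second); err != nil {
    		fmt.Println(err)
    	}
    }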
	I0108 21:33:04.482015  282279 ssh_runner.go:195] Run: containerd --version
	I0108 21:33:04.505934  282279 ssh_runner.go:195] Run: containerd --version
	I0108 21:33:04.531417  282279 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:33:04.533192  282279 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-211952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:33:04.556070  282279 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0108 21:33:04.559379  282279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:33:04.568499  282279 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:33:04.568548  282279 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:33:04.591581  282279 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:33:04.591606  282279 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:33:04.591658  282279 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:33:04.614523  282279 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:33:04.614545  282279 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:33:04.614587  282279 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:33:04.638172  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:33:04.638197  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:33:04.638209  282279 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:33:04.638221  282279 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-211952 NodeName:default-k8s-diff-port-211952 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/c
erts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:33:04.638396  282279 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-diff-port-211952"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
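The generated config above is what --apiserver-port=8444 translates into: bindPort: 8444 in InitConfiguration and controlPlaneEndpoint: control-plane.minikube.internal:8444 in ClusterConfiguration, with disk-pressure eviction effectively disabled (imageGCHighThresholdPercent: 100 and all evictionHard thresholds at 0%). As a hypothetical spot-check, not part of the test, one can probe the non-default port directly once the control plane is up:

    // Hypothetical probe: confirm the apiserver answers on the non-default port
    // 8444 configured above. The serving cert is signed by minikubeCA, which this
    // ad-hoc check does not load, hence InsecureSkipVerify for the probe only.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.67.2:8444/version")
    	if err != nil {
    		fmt.Println("apiserver not reachable on 8444:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("apiserver answered on 8444:", resp.Status)
    }

Even an unauthenticated 401/403 from /version would show the listener is bound to 8444 rather than the default 8443.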
	
	I0108 21:33:04.638498  282279 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-diff-port-211952 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0108 21:33:04.638546  282279 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:33:04.645671  282279 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:33:04.645725  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:33:04.652367  282279 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (521 bytes)
	I0108 21:33:04.664767  282279 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:33:04.676853  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes)
	I0108 21:33:04.689096  282279 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:33:04.691974  282279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:33:04.700883  282279 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952 for IP: 192.168.67.2
	I0108 21:33:04.700988  282279 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:33:04.701028  282279 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:33:04.701091  282279 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/client.key
	I0108 21:33:04.701143  282279 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key.c7fa3a9e
	I0108 21:33:04.701174  282279 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.key
	I0108 21:33:04.701257  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:33:04.701282  282279 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:33:04.701292  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:33:04.701314  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:33:04.701334  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:33:04.701353  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:33:04.701392  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:33:04.701980  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:33:04.719063  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:33:04.735492  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:33:04.752219  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:33:04.769562  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:33:04.785821  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:33:04.802771  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:33:04.820712  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:33:04.838855  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:33:04.855960  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:33:04.872964  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:33:04.890046  282279 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:33:04.902625  282279 ssh_runner.go:195] Run: openssl version
	I0108 21:33:04.907630  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:33:04.914856  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.917989  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.918039  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.922582  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:33:04.929304  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:33:04.936712  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.939656  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.939705  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.944460  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:33:04.951168  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:33:04.958399  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.961446  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.961485  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.966099  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
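The openssl/ln sequence above is how the CAs end up in the node's trust store: each PEM under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and symlinked into /etc/ssl/certs as <hash>.0 (b5213941.0 for minikubeCA, 51391683.0 and 3ec20f2e.0 for the test certificates). A small sketch of the same step, assuming openssl and sudo are available (a reconstruction, not minikube's code):

    // Reconstruction of the trust-store step above: compute the OpenSSL subject
    // hash of a CA file and link it into /etc/ssl/certs/<hash>.0.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		fmt.Println("hashing failed:", err)
    		return
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	// ln -fs <pem> <link>; needs root, which is why the log wraps it in sudo bash -c.
    	if err := exec.Command("sudo", "ln", "-fs", pem, link).Run(); err != nil {
    		fmt.Println("linking failed:", err)
    		return
    	}
    	fmt.Println("linked", pem, "->", link)
    }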
	I0108 21:33:04.973053  282279 kubeadm.go:396] StartCluster: {Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:33:04.973140  282279 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:33:04.973193  282279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:33:04.997395  282279 cri.go:87] found id: "852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	I0108 21:33:04.997418  282279 cri.go:87] found id: "7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc"
	I0108 21:33:04.997424  282279 cri.go:87] found id: "26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225"
	I0108 21:33:04.997430  282279 cri.go:87] found id: "581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d"
	I0108 21:33:04.997436  282279 cri.go:87] found id: "e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa"
	I0108 21:33:04.997442  282279 cri.go:87] found id: "b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d"
	I0108 21:33:04.997448  282279 cri.go:87] found id: ""
	I0108 21:33:04.997486  282279 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:33:05.008860  282279 cri.go:114] JSON = null
	W0108 21:33:05.008911  282279 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0108 21:33:05.008979  282279 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:33:05.015919  282279 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:33:05.015939  282279 kubeadm.go:627] restartCluster start
	I0108 21:33:05.015976  282279 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:33:05.022384  282279 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.023096  282279 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-211952" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:33:05.023497  282279 kubeconfig.go:146] "default-k8s-diff-port-211952" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:33:05.024165  282279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:33:05.025421  282279 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:33:05.032110  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.032154  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.039769  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.240114  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.240215  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.248661  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.439925  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.440040  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.448824  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.640029  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.640100  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.648577  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.839823  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.839949  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.848450  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.040650  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.040716  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.049118  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.240431  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.240537  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.249216  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.440559  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.440631  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.449237  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.640348  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.640440  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.648807  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.840116  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.840207  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.848729  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.039918  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.039988  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.048542  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.240718  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.240800  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.249405  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.440610  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.440687  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.449502  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.640620  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.640687  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.649358  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.840624  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.840691  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.849725  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.039967  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:08.040051  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:08.048653  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.048676  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:08.048717  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:08.056766  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.056803  282279 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0108 21:33:08.056811  282279 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:33:08.056824  282279 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:33:08.056880  282279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:33:08.081283  282279 cri.go:87] found id: "852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	I0108 21:33:08.081308  282279 cri.go:87] found id: "7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc"
	I0108 21:33:08.081315  282279 cri.go:87] found id: "26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225"
	I0108 21:33:08.081322  282279 cri.go:87] found id: "581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d"
	I0108 21:33:08.081330  282279 cri.go:87] found id: "e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa"
	I0108 21:33:08.081340  282279 cri.go:87] found id: "b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d"
	I0108 21:33:08.081349  282279 cri.go:87] found id: ""
	I0108 21:33:08.081357  282279 cri.go:232] Stopping containers: [852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f 7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc 26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225 581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d]
	I0108 21:33:08.081407  282279 ssh_runner.go:195] Run: which crictl
	I0108 21:33:08.084402  282279 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f 7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc 26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225 581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d
	I0108 21:33:08.110089  282279 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:33:08.120362  282279 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:33:08.127839  282279 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan  8 21:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  8 21:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Jan  8 21:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  8 21:20 /etc/kubernetes/scheduler.conf
	
	I0108 21:33:08.127889  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0108 21:33:08.134530  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0108 21:33:08.141215  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0108 21:33:08.147849  282279 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.147901  282279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 21:33:08.154323  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0108 21:33:08.161096  282279 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.161153  282279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 21:33:08.167783  282279 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:33:08.174752  282279 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:33:08.174774  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.220042  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.629802  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.761310  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.827730  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.933064  282279 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:33:08.933117  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:09.442969  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:09.942976  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:10.014802  282279 api_server.go:71] duration metric: took 1.081741817s to wait for apiserver process to appear ...
	I0108 21:33:10.014831  282279 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:33:10.014843  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:13.540654  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:33:13.540692  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:33:14.041349  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:14.045672  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:33:14.045695  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:33:14.540838  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:14.545990  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:33:14.546035  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:33:15.041627  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:15.046572  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 200:
	ok
	I0108 21:33:15.052817  282279 api_server.go:140] control plane version: v1.25.3
	I0108 21:33:15.052839  282279 api_server.go:130] duration metric: took 5.038002036s to wait for apiserver health ...
	I0108 21:33:15.052848  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:33:15.052854  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:33:15.055132  282279 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:33:15.056590  282279 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:33:15.060305  282279 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:33:15.060320  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:33:15.073482  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:33:15.711930  282279 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:33:15.718666  282279 system_pods.go:59] 9 kube-system pods found
	I0108 21:33:15.718695  282279 system_pods.go:61] "coredns-565d847f94-fd94f" [08c29923-1e9a-4576-884b-e79485bdb24e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718706  282279 system_pods.go:61] "etcd-default-k8s-diff-port-211952" [4d6fe94c-75ef-40cf-b1c1-2377203f2503] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:33:15.718714  282279 system_pods.go:61] "kindnet-52cqk" [4ae6659c-e68a-492e-9e3f-5ffb047114c5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:33:15.718719  282279 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-211952" [e7f5a5bc-2f08-46ed-b8e1-1551fa29d27c] Running
	I0108 21:33:15.718728  282279 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-211952" [28c6bf68-0f27-494d-9102-fc669542c4a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:33:15.718735  282279 system_pods.go:61] "kube-proxy-hz8lw" [fa7c0714-1e45-4256-9383-976e79d1e49e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:33:15.718742  282279 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-211952" [645cd11b-9e55-47fe-aa43-f3b702c95c45] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:33:15.718751  282279 system_pods.go:61] "metrics-server-5c8fd5cf8-l2hp5" [bcd90320-490a-4343-abcb-f40aa375512e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718757  282279 system_pods.go:61] "storage-provisioner" [ad01ceaf-4269-4a54-b47e-b56d85e14354] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718765  282279 system_pods.go:74] duration metric: took 6.815857ms to wait for pod list to return data ...
	I0108 21:33:15.718772  282279 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:33:15.721658  282279 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:33:15.721678  282279 node_conditions.go:123] node cpu capacity is 8
	I0108 21:33:15.721690  282279 node_conditions.go:105] duration metric: took 2.910879ms to run NodePressure ...
	I0108 21:33:15.721709  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:15.850359  282279 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 21:33:15.854037  282279 kubeadm.go:778] kubelet initialised
	I0108 21:33:15.854056  282279 kubeadm.go:779] duration metric: took 3.67496ms waiting for restarted kubelet to initialise ...
	I0108 21:33:15.854063  282279 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:33:15.859567  282279 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:17.864672  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.865551  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:22.365227  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:24.865051  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:27.364362  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:29.365262  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:31.864536  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:33.865545  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:36.364706  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:38.366314  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:40.865544  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:43.364368  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:45.365457  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:47.865583  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:50.365374  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:52.865225  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:55.364623  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:57.365130  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:59.865408  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:02.364929  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:04.864561  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:07.366326  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:09.865391  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:12.364526  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:14.364606  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:16.365289  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:18.864582  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:20.865195  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:22.865407  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:25.364979  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:27.365790  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:29.865042  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:31.865310  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:33.865432  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.365146  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:38.865173  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:41.364499  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.365079  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:45.365570  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:47.865054  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:50.365544  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:52.864342  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:54.865174  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:56.865226  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:59.365717  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:01.865247  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:03.865438  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:06.365588  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:08.865293  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:11.364853  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:13.864458  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:15.865297  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:18.364605  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:20.365307  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:22.865280  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:25.365211  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:27.865212  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:29.865294  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:32.365083  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:34.864627  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:36.865632  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:39.365282  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:41.365393  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:43.865525  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:46.364697  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:48.365304  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:50.865062  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:53.364585  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:55.866756  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.365278  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.864694  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:02.865305  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:05.365592  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.865076  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:10.364594  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.365393  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:14.864781  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:16.865103  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:18.865230  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:20.865904  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:23.365451  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:25.365511  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:27.864750  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:29.865401  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:31.865613  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:34.364509  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:36.364566  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:38.365195  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:40.864900  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:42.865124  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:45.365383  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:47.865553  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:50.364823  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:52.865232  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:55.364998  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:57.365375  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:59.865037  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:02.364695  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:04.864908  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:07.365162  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:09.365405  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:11.865521  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:14.364973  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:15.862343  282279 pod_ready.go:81] duration metric: took 4m0.002735215s waiting for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" ...
	E0108 21:37:15.862365  282279 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:37:15.862410  282279 pod_ready.go:38] duration metric: took 4m0.008337756s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:37:15.862442  282279 kubeadm.go:631] restartCluster took 4m10.846498869s
	W0108 21:37:15.862572  282279 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:37:15.862600  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:37:18.604264  282279 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.741643542s)
	I0108 21:37:18.604323  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:18.613785  282279 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:37:18.620707  282279 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:37:18.620756  282279 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:37:18.627110  282279 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:37:18.627161  282279 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:37:18.665230  282279 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:37:18.665379  282279 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:37:18.693390  282279 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:37:18.693485  282279 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:37:18.693536  282279 kubeadm.go:317] OS: Linux
	I0108 21:37:18.693625  282279 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:37:18.693699  282279 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:37:18.693758  282279 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:37:18.693816  282279 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:37:18.693855  282279 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:37:18.693897  282279 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:37:18.693932  282279 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:37:18.693986  282279 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:37:18.694033  282279 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:37:18.757764  282279 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:37:18.757887  282279 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:37:18.757990  282279 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:37:18.880203  282279 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:37:18.885649  282279 out.go:204]   - Generating certificates and keys ...
	I0108 21:37:18.885786  282279 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:37:18.885859  282279 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:37:18.885942  282279 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:37:18.886014  282279 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:37:18.886108  282279 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:37:18.886194  282279 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:37:18.886282  282279 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:37:18.886366  282279 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:37:18.886464  282279 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:37:18.886537  282279 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:37:18.886603  282279 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:37:18.886705  282279 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:37:18.970116  282279 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:37:19.061650  282279 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:37:19.314844  282279 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:37:19.411377  282279 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:37:19.423013  282279 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:37:19.423842  282279 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:37:19.423907  282279 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:37:19.507274  282279 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:37:19.509473  282279 out.go:204]   - Booting up control plane ...
	I0108 21:37:19.509609  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:37:19.510392  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:37:19.511285  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:37:19.512005  282279 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:37:19.514544  282279 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:37:25.517443  282279 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002884 seconds
	I0108 21:37:25.517596  282279 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:37:25.525842  282279 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:37:26.040802  282279 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:37:26.041035  282279 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-diff-port-211952 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:37:26.548645  282279 kubeadm.go:317] [bootstrap-token] Using token: e8jg3u.r5d9gog7fpwiofqp
	I0108 21:37:26.550383  282279 out.go:204]   - Configuring RBAC rules ...
	I0108 21:37:26.550517  282279 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:37:26.553632  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:37:26.561595  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:37:26.563603  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:37:26.566273  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:37:26.569011  282279 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:37:26.577117  282279 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:37:26.777486  282279 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:37:26.956684  282279 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:37:26.957742  282279 kubeadm.go:317] 
	I0108 21:37:26.957841  282279 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:37:26.957852  282279 kubeadm.go:317] 
	I0108 21:37:26.957946  282279 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:37:26.957959  282279 kubeadm.go:317] 
	I0108 21:37:26.957992  282279 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:37:26.958072  282279 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:37:26.958151  282279 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:37:26.958161  282279 kubeadm.go:317] 
	I0108 21:37:26.958244  282279 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 21:37:26.958255  282279 kubeadm.go:317] 
	I0108 21:37:26.958324  282279 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:37:26.958334  282279 kubeadm.go:317] 
	I0108 21:37:26.958411  282279 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:37:26.958519  282279 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:37:26.958614  282279 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:37:26.958627  282279 kubeadm.go:317] 
	I0108 21:37:26.958736  282279 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:37:26.958873  282279 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:37:26.958895  282279 kubeadm.go:317] 
	I0108 21:37:26.958993  282279 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token e8jg3u.r5d9gog7fpwiofqp \
	I0108 21:37:26.959108  282279 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:37:26.959144  282279 kubeadm.go:317] 	--control-plane 
	I0108 21:37:26.959155  282279 kubeadm.go:317] 
	I0108 21:37:26.959279  282279 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:37:26.959295  282279 kubeadm.go:317] 
	I0108 21:37:26.959387  282279 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token e8jg3u.r5d9gog7fpwiofqp \
	I0108 21:37:26.959591  282279 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:37:27.010668  282279 kubeadm.go:317] W0108 21:37:18.659761    3310 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:37:27.010963  282279 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:37:27.011109  282279 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:37:27.011143  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:37:27.011161  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:37:27.013790  282279 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:37:27.016436  282279 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:37:27.020247  282279 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:37:27.020267  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:37:27.033939  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:37:27.773746  282279 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:37:27.773820  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:27.773829  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=default-k8s-diff-port-211952 minikube.k8s.io/updated_at=2023_01_08T21_37_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:27.858069  282279 ops.go:34] apiserver oom_adj: -16
	I0108 21:37:27.858162  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:28.451616  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:28.951553  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:29.451725  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:29.950766  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.450878  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.951743  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:31.450739  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:31.951303  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:32.450882  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:32.951389  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:33.451553  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:33.951640  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:34.451179  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:34.951522  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.450753  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.950904  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:36.450992  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:36.951610  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:37.451311  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:37.951081  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:38.451124  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:38.951311  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:39.451052  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:39.951786  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:40.450906  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:40.622559  282279 kubeadm.go:1067] duration metric: took 12.848793735s to wait for elevateKubeSystemPrivileges.
	I0108 21:37:40.622595  282279 kubeadm.go:398] StartCluster complete in 4m35.649555324s
	I0108 21:37:40.622614  282279 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:40.622704  282279 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:37:40.623799  282279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:41.138673  282279 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-diff-port-211952" rescaled to 1
	I0108 21:37:41.138736  282279 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:37:41.138753  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:37:41.141673  282279 out.go:177] * Verifying Kubernetes components...
	I0108 21:37:41.138793  282279 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:37:41.138974  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:37:41.143598  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:41.143622  282279 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143643  282279 addons.go:227] Setting addon storage-provisioner=true in "default-k8s-diff-port-211952"
	W0108 21:37:41.143652  282279 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:37:41.143672  282279 addons.go:65] Setting default-storageclass=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143694  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.143696  282279 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-211952"
	I0108 21:37:41.143742  282279 addons.go:65] Setting metrics-server=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143751  282279 addons.go:65] Setting dashboard=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143771  282279 addons.go:227] Setting addon metrics-server=true in "default-k8s-diff-port-211952"
	I0108 21:37:41.143780  282279 addons.go:227] Setting addon dashboard=true in "default-k8s-diff-port-211952"
	W0108 21:37:41.143797  282279 addons.go:236] addon dashboard should already be in state true
	I0108 21:37:41.143841  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	W0108 21:37:41.143781  282279 addons.go:236] addon metrics-server should already be in state true
	I0108 21:37:41.143915  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.144018  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144222  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144229  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144299  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.184041  282279 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:37:41.186236  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:37:41.186259  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:37:41.183770  282279 addons.go:227] Setting addon default-storageclass=true in "default-k8s-diff-port-211952"
	I0108 21:37:41.186311  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	W0108 21:37:41.186320  282279 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:37:41.186356  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.187948  282279 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:37:41.186812  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.191003  282279 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:37:41.189639  282279 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:41.192705  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:37:41.192773  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.195052  282279 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:37:41.196683  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:37:41.196706  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:37:41.196763  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.221516  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.226288  282279 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:41.226312  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:37:41.226392  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.226595  282279 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-211952" to be "Ready" ...
	I0108 21:37:41.226958  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:37:41.233899  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.236188  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.261350  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.328029  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:37:41.328055  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:37:41.410390  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:37:41.410477  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:37:41.429903  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:41.429978  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:37:41.431528  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:41.434596  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:41.435835  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:37:41.435891  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:37:41.518039  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:41.525611  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:37:41.525635  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:37:41.617739  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:37:41.617770  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:37:41.710400  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:37:41.710430  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:37:41.733619  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:37:41.733650  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:37:41.913693  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:37:41.913722  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:37:41.923702  282279 start.go:826] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0108 21:37:41.939574  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:37:41.939602  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:37:42.033056  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:37:42.033090  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:37:42.126252  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:42.126280  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:37:42.219356  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:42.612393  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.177754873s)
	I0108 21:37:42.649146  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.131058374s)
	I0108 21:37:42.649245  282279 addons.go:457] Verifying addon metrics-server=true in "default-k8s-diff-port-211952"
	I0108 21:37:43.233589  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:43.519132  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.299673532s)
	I0108 21:37:43.521195  282279 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-211952 addons enable metrics-server	
	
	
	I0108 21:37:43.523337  282279 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0108 21:37:43.525339  282279 addons.go:488] enableAddons completed in 2.386543882s
	I0108 21:37:45.732797  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:47.733580  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:50.232935  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:52.233798  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:54.733016  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:56.733874  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:59.233003  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:01.233346  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:03.233665  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:05.233897  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:07.234180  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:09.733403  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:11.733914  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:14.233667  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:16.732749  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:18.733049  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:20.734111  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:23.233585  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:25.233967  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:27.732889  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:29.733825  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:32.234140  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:34.733077  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:36.733560  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:38.733737  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:41.232994  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:43.233767  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:45.233859  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:47.733544  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:49.733766  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:52.233361  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:54.233916  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:56.733328  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:58.734209  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:01.232932  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:03.233020  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:05.233361  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:07.233770  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:09.733072  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:11.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:14.233759  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:16.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:18.733878  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:21.233705  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:23.733860  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:26.233091  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:28.233460  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:30.733105  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:32.734009  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:35.233611  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:37.733328  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:39.733731  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:42.233801  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:44.733038  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:46.733391  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:49.233954  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:51.733795  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:54.234004  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:56.733167  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:59.233766  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:01.733686  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:04.233335  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:06.233688  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:08.233796  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:10.233869  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:12.733211  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:15.233047  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:17.733710  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:20.232874  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:22.232916  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:24.233476  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:26.733575  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:28.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:31.233731  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:33.233890  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:35.733135  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:37.733332  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:40.233285  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:42.234025  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:44.733707  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:47.232740  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:49.233976  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:51.733761  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:54.233585  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:56.233841  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:58.733149  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:01.233702  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:03.233901  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:05.733569  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:08.233143  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:10.234013  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:12.733801  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:15.233487  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:17.233814  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:19.233917  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:21.234234  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:23.732866  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:25.733792  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:27.734348  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:30.233612  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:32.233852  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:34.233919  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:36.733239  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:38.733765  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:41.233693  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:41.235775  282279 node_ready.go:38] duration metric: took 4m0.009149141s waiting for node "default-k8s-diff-port-211952" to be "Ready" ...
	I0108 21:41:41.238174  282279 out.go:177] 
	W0108 21:41:41.239722  282279 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:41:41.239744  282279 out.go:239] * 
	* 
	W0108 21:41:41.240644  282279 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:41:41.242421  282279 out.go:177] 

                                                
                                                
** /stderr **
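The stderr above shows the restart completing addon setup and then failing solely because the node never reports Ready within the configured wait (the repeated node_ready.go polls). A minimal diagnostic sketch, assuming the profile's kubeconfig context from the log is usable on the host; these commands are illustrative only and are not run by the test:

	kubectl --context default-k8s-diff-port-211952 get node default-k8s-diff-port-211952 \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status} ({.reason}){"\n"}{end}'
	kubectl --context default-k8s-diff-port-211952 -n kube-system get pods -o wide

A NotReady node in this situation typically carries a Ready condition whose reason and message name the kubelet or the pod network, and the kube-system pod listing shows whether kube-proxy and coredns ever left Pending.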
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-211952 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3": exit status 80
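For a local reproduction outside the test harness, the failing start can be rerun and its log bundle captured; this is only a sketch assembled from the command recorded above and the report's own advice to collect logs:

	out/minikube-linux-amd64 start -p default-k8s-diff-port-211952 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --container-runtime=containerd --kubernetes-version=v1.25.3
	out/minikube-linux-amd64 -p default-k8s-diff-port-211952 logs --file=logs.txt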
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-211952
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-211952:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a",
	        "Created": "2023-01-08T21:20:01.150415833Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 282587,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:32:48.927228025Z",
	            "FinishedAt": "2023-01-08T21:32:47.253802017Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/hostname",
	        "HostsPath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/hosts",
	        "LogPath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a-json.log",
	        "Name": "/default-k8s-diff-port-211952",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-211952:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-211952",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4-init/diff:/var/lib/docker/overlay2/08b33854295261bb38a0074055b3dd7f2f489d35ea70ee5a594182ad1dc0fd2b/diff:/var/lib/docker/overlay2/5abe192e44c49d2a25560e4c1ae4a6ba5ec58100f8e1869d306a28b614caf414/diff:/var/lib/docker/overlay2/156a32df2dac676d7cd6558d5ad13be5054eb2fddc8d1b10137bf34bd6b1ddf7/diff:/var/lib/docker/overlay2/f3a5b8630133b5cdc48bbdb98f666efad221386dbd08f5fc096c7ba6d3f31667/diff:/var/lib/docker/overlay2/4d4ccd218e479874818175ec6c4e28f06f0084eed74b9a83af5a96e8bf41e6c7/diff:/var/lib/docker/overlay2/ab8a3247b29cb45a0024dbc2477a50b042f03e39064c09e23d21a2b5f331013c/diff:/var/lib/docker/overlay2/cd6161fa5d366910a9eb537894f374c49b0ba256c81a9c1ad3d4f4786d730668/diff:/var/lib/docker/overlay2/a26d88db409214dee405ed385f40b0d58e37ef670c26e65c1132eb27e3bb984d/diff:/var/lib/docker/overlay2/f76d833c20bf46fe7aec412f931039cdeb58b129552b97228c55fada8b05e63a/diff:/var/lib/docker/overlay2/584beb
80220462b8bc5a6c651735d55544ebef775c71b1a91b92235049a7e9da/diff:/var/lib/docker/overlay2/5209ebd508837f05291d5f22eff307db47f9c94691dd096cdc6ff6446db45461/diff:/var/lib/docker/overlay2/968fc302c0737742d4698bb0026b5cfc50c4b2934930db492c97b346fca2bd67/diff:/var/lib/docker/overlay2/b469d09ff82a75fb0f6dd04264a04b21d3d113c8577afc6dbc6a7dafe4022179/diff:/var/lib/docker/overlay2/2869116cf2f02bdc6170bde623a11022eba8e5c0f84630c9a0fd0a86483d0ed2/diff:/var/lib/docker/overlay2/ad4794ae637b0ce306963007b35562110fba050d9e3cd6196830448c749d7cc6/diff:/var/lib/docker/overlay2/a53b191897dfffb12ad2948cad16d668a0f7c2be5e3e6fd2556fe22d8d608879/diff:/var/lib/docker/overlay2/bbc7772821406640e29f442eb8752d92d7b72e8314035d6cf94255976b90642e/diff:/var/lib/docker/overlay2/5e3ac5600af83e3f6f756895fbd7a66fa97e799f659541508353078d057fd89b/diff:/var/lib/docker/overlay2/79c23e7084aa1de637cb09a7c07679d6c8d4ff6c6e7384f7f15cb549f8186fd6/diff:/var/lib/docker/overlay2/86ef1d4759376f1edb9cf43004ae59ba61200d79acea357b872de5247071531a/diff:/var/lib/d
ocker/overlay2/5c948a7514e051c2a3b4600944d475a674404c1f58b1bd7fd4d7b764be35c3e7/diff:/var/lib/docker/overlay2/71b25d81823337d776e0202f623b2f822a2ebba0e8f935e3211da145bbdac22d/diff:/var/lib/docker/overlay2/4261f26f8c837e7f8f07ba0cd2aca7274aa96561ac0bd104af28d44c15678f8d/diff:/var/lib/docker/overlay2/f2a71b21156c3ad367bb9fe8bde3e75df6681f9e505d34d8e1b635e50aa0e29b/diff:/var/lib/docker/overlay2/9ef1fde443b60bf4cda7ef2a51adcacef4383280f30eb6ee14e23959a5395e18/diff:/var/lib/docker/overlay2/aa727d5a432452b3e1ed2fc994db0bf2d5a47af89bc768a1e2b633baa8319025/diff:/var/lib/docker/overlay2/dc9499b738334b5626de1363086d853c4e6c2c5b2d51f58e8e1ed8346cc39f45/diff:/var/lib/docker/overlay2/03cc962675627b454b9857b5ffcac439143653d35cd42ad886571fbecd955c6c/diff:/var/lib/docker/overlay2/32064543ecb8bae609fda6b220233d7b73f4e601bd320e1d389451fb120c2459/diff:/var/lib/docker/overlay2/a0c00493cc3fdbdaa87f61faf83c8fcedb9ee4d5b24ecc35eceba0429b6dcc88/diff:/var/lib/docker/overlay2/18cbe77f8dab8144889e1deb46a8795fd207afbdfd9ea3fdeb526b72d7e
4a4cb/diff:/var/lib/docker/overlay2/dca4f5d0bca28bc75fa764afd39d07871f051c2d362fc2850ca957b975f75555/diff:/var/lib/docker/overlay2/cd071d8f661b9ff9c6d861fce06842188fda3d307062488b93944a97883f4a67/diff:/var/lib/docker/overlay2/5a0c166a1e11d0bb9f7de4a882be960f70ac972e91b3ab8fd83fc6eec2bf1d5e/diff:/var/lib/docker/overlay2/43eb3ae44d3bf3e6cbf5a66accacb1247ef210213c56a67097949d06794f4bbc/diff:/var/lib/docker/overlay2/b60cb5b525f4aed6b27cf2e451596e134d1185254f3798d5a91357814552e273/diff:/var/lib/docker/overlay2/e5c8c85e78fedb1981436f368ad70bdfe38d240dddceac3d103e1d09bc252cef/diff:/var/lib/docker/overlay2/7bf66ab7d17eda3c6760fe43588afa33bb56279f5a7202de9cc98b4711f0da5b/diff:/var/lib/docker/overlay2/d5dbd6e8a20b8681e7b55b57c382c8c9029da8496cc9e0f4d1eb3b224583535e/diff:/var/lib/docker/overlay2/de3ca9b514aa5071c0fd39d0d80fa080bce6e82e872747ec916ef7bf379c0c2a/diff:/var/lib/docker/overlay2/80bff03df7f2a6639d98a2c99d1cc55a36dc9110501452bfd7bce3bce496541e/diff:/var/lib/docker/overlay2/471bea5a4732f210c3fcf87a02643b615a96f6
39d01ac2c3e71f7886b23733a5/diff:/var/lib/docker/overlay2/29dd84326e3ef52c85008a60a7ff2225fd18f1d51ec3e60f143a0d93e442469d/diff:/var/lib/docker/overlay2/05d035eba3e40a396836d9917064869afd7ec70920f526f8903dc8f5a9f2dce3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-211952",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-211952/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-211952",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-211952",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-211952",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "53e7e605d95635360fe097ddbfb4741ba8863864c9efdba4f96c7beabd6b2a3d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/53e7e605d956",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-211952": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "553ec1d733bb",
	                        "default-k8s-diff-port-211952"
	                    ],
	                    "NetworkID": "dac77270e17703c586bb819b54d2f7262cc084b9a2efd9432712b1970a60294f",
	                    "EndpointID": "c6be5b4f6a510a10d7efb0fabb1b87fa86a3d15a8ac3c847110291d9b95f085b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
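The inspect output above confirms the container is still running and that SSH is published on 127.0.0.1:33057, matching the sshutil connections in the stderr log. The same host port can be read back with the inspect template the tooling itself uses earlier in the log, shown here only as a standalone sketch:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-211952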
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-211952 -n default-k8s-diff-port-211952
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-211952 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-212639                 | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-212639                      | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-212639 sudo                                  | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| addons  | enable metrics-server -p old-k8s-version-211828            | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-211828                                  | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-211828                 | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-211828                                  | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --kvm-network=default                                      |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                              |         |         |                     |                     |
	|         | --keep-context=false                                       |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-211859                 | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p no-preload-211859                                       | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-211859                      | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p no-preload-211859                                       | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr                                          |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC | 08 Jan 23 21:32 UTC |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC | 08 Jan 23 21:32 UTC |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-211952           | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC | 08 Jan 23 21:32 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC |                     |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 21:32:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:32:48.271671  282279 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:32:48.271850  282279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:32:48.271858  282279 out.go:309] Setting ErrFile to fd 2...
	I0108 21:32:48.271863  282279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:32:48.271968  282279 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:32:48.272502  282279 out.go:303] Setting JSON to false
	I0108 21:32:48.273983  282279 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4518,"bootTime":1673209051,"procs":571,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:32:48.274047  282279 start.go:135] virtualization: kvm guest
	I0108 21:32:48.276504  282279 out.go:177] * [default-k8s-diff-port-211952] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:32:48.277957  282279 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:32:48.277885  282279 notify.go:220] Checking for updates...
	I0108 21:32:48.279445  282279 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:32:48.280736  282279 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:32:48.281949  282279 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:32:48.283257  282279 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:32:48.285163  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:32:48.285682  282279 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:32:48.316260  282279 docker.go:137] docker version: linux-20.10.22
	I0108 21:32:48.316350  282279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:32:48.413793  282279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:32:48.33729701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:32:48.413905  282279 docker.go:254] overlay module found
	I0108 21:32:48.417336  282279 out.go:177] * Using the docker driver based on existing profile
	I0108 21:32:48.418815  282279 start.go:294] selected driver: docker
	I0108 21:32:48.418829  282279 start.go:838] validating driver "docker" against &{Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:32:48.419310  282279 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:32:48.420906  282279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:32:48.521697  282279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:32:48.442146841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:32:48.522015  282279 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:32:48.522046  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:32:48.522065  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:32:48.522085  282279 start_flags.go:317] config:
	{Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:32:48.525023  282279 out.go:177] * Starting control plane node default-k8s-diff-port-211952 in cluster default-k8s-diff-port-211952
	I0108 21:32:48.526212  282279 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:32:48.527567  282279 out.go:177] * Pulling base image ...
	I0108 21:32:48.528812  282279 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:32:48.528852  282279 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0108 21:32:48.528864  282279 cache.go:57] Caching tarball of preloaded images
	I0108 21:32:48.528902  282279 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:32:48.529139  282279 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:32:48.529153  282279 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0108 21:32:48.529259  282279 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/config.json ...
	I0108 21:32:48.553994  282279 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:32:48.554019  282279 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:32:48.554037  282279 cache.go:193] Successfully downloaded all kic artifacts
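The "exists in daemon, skipping load" decision above only checks the host's Docker daemon for the pinned kicbase digest; the same precondition can be checked by hand with the docker CLI (illustrative only, not part of the run):

	# List locally cached kicbase builds with their digests; the digest pinned in the
	# log above must appear here for the skip-pull path to be taken.
	docker images --digests gcr.io/k8s-minikube/kicbase-builds
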
	I0108 21:32:48.554075  282279 start.go:364] acquiring machines lock for default-k8s-diff-port-211952: {Name:mk8d09fc97f48331eb5f466fa120df2ec3fb1468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:32:48.554172  282279 start.go:368] acquired machines lock for "default-k8s-diff-port-211952" in 76.094µs
	I0108 21:32:48.554190  282279 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:32:48.554194  282279 fix.go:55] fixHost starting: 
	I0108 21:32:48.554387  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:32:48.579038  282279 fix.go:103] recreateIfNeeded on default-k8s-diff-port-211952: state=Stopped err=<nil>
	W0108 21:32:48.579064  282279 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:32:48.581203  282279 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-211952" ...
	I0108 21:32:45.206742  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:47.706026  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:47.985367  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:50.484419  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:48.582569  282279 cli_runner.go:164] Run: docker start default-k8s-diff-port-211952
	I0108 21:32:48.934338  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:32:48.961177  282279 kic.go:415] container "default-k8s-diff-port-211952" state is running.
	I0108 21:32:48.961578  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:48.987154  282279 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/config.json ...
	I0108 21:32:48.987361  282279 machine.go:88] provisioning docker machine ...
	I0108 21:32:48.987381  282279 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-211952"
	I0108 21:32:48.987415  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:49.012441  282279 main.go:134] libmachine: Using SSH client type: native
	I0108 21:32:49.012623  282279 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0108 21:32:49.012640  282279 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-211952 && echo "default-k8s-diff-port-211952" | sudo tee /etc/hostname
	I0108 21:32:49.013295  282279 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56504->127.0.0.1:33057: read: connection reset by peer
	I0108 21:32:52.144323  282279 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-211952
	
	I0108 21:32:52.144405  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.170929  282279 main.go:134] libmachine: Using SSH client type: native
	I0108 21:32:52.171092  282279 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0108 21:32:52.171123  282279 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-211952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-211952/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-211952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:32:52.287354  282279 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:32:52.287380  282279 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:32:52.287397  282279 ubuntu.go:177] setting up certificates
	I0108 21:32:52.287404  282279 provision.go:83] configureAuth start
	I0108 21:32:52.287448  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:52.314640  282279 provision.go:138] copyHostCerts
	I0108 21:32:52.314692  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:32:52.314701  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:32:52.314776  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:32:52.314872  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:32:52.314881  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:32:52.314915  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:32:52.314981  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:32:52.314990  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:32:52.315028  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:32:52.315090  282279 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-211952 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-211952]
	I0108 21:32:52.393623  282279 provision.go:172] copyRemoteCerts
	I0108 21:32:52.393682  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:32:52.393732  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.420616  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.506700  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:32:52.523990  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0108 21:32:52.541202  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:32:52.558612  282279 provision.go:86] duration metric: configureAuth took 271.196425ms
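configureAuth above regenerates the machine server certificate with the SANs listed a few lines earlier (192.168.67.2, 127.0.0.1, localhost, minikube, default-k8s-diff-port-211952). If a TLS handshake problem were suspected, the SANs can be read back from the host-side copy with openssl; a sketch using the path from this run:

	# Print the Subject Alternative Names of the regenerated server certificate.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
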
	I0108 21:32:52.558637  282279 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:32:52.558842  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:32:52.558859  282279 machine.go:91] provisioned docker machine in 3.571482619s
	I0108 21:32:52.558868  282279 start.go:300] post-start starting for "default-k8s-diff-port-211952" (driver="docker")
	I0108 21:32:52.558880  282279 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:32:52.558932  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:32:52.558975  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.584657  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.674855  282279 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:32:52.677553  282279 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:32:52.677581  282279 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:32:52.677595  282279 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:32:52.677605  282279 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:32:52.677620  282279 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:32:52.677677  282279 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:32:52.677760  282279 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:32:52.677874  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:32:52.684482  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:32:52.701176  282279 start.go:303] post-start completed in 142.293081ms
	I0108 21:32:52.701237  282279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:32:52.701267  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.726596  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.807879  282279 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:32:52.811789  282279 fix.go:57] fixHost completed within 4.257589708s
	I0108 21:32:52.811814  282279 start.go:83] releasing machines lock for "default-k8s-diff-port-211952", held for 4.257630168s
	I0108 21:32:52.811884  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:52.836240  282279 ssh_runner.go:195] Run: cat /version.json
	I0108 21:32:52.836282  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.836337  282279 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:32:52.836380  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.860700  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.862030  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.970766  282279 ssh_runner.go:195] Run: systemctl --version
	I0108 21:32:52.974774  282279 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:32:52.987146  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:32:52.996877  282279 docker.go:189] disabling docker service ...
	I0108 21:32:52.996922  282279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:32:53.006589  282279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:32:53.015555  282279 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:32:53.091863  282279 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:32:53.169568  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:32:53.178903  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:32:53.192470  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:32:53.200832  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:32:53.209487  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:32:53.217000  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 21:32:53.224820  282279 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:32:53.231063  282279 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:32:53.237511  282279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:32:50.205796  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:52.206925  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:54.705913  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:52.485249  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:54.984287  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:56.984440  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:53.318100  282279 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:32:53.382213  282279 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:32:53.382279  282279 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:32:53.386027  282279 start.go:472] Will wait 60s for crictl version
	I0108 21:32:53.386088  282279 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:32:53.410740  282279 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:32:53Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 21:32:56.706559  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:59.206591  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:59.485251  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:01.985238  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:04.458457  282279 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:33:04.481958  282279 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:33:04.482015  282279 ssh_runner.go:195] Run: containerd --version
	I0108 21:33:04.505934  282279 ssh_runner.go:195] Run: containerd --version
	I0108 21:33:04.531417  282279 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:33:01.206633  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:03.705866  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:04.484384  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:06.484587  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:04.533192  282279 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-211952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:33:04.556070  282279 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0108 21:33:04.559379  282279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:33:04.568499  282279 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:33:04.568548  282279 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:33:04.591581  282279 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:33:04.591606  282279 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:33:04.591658  282279 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:33:04.614523  282279 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:33:04.614545  282279 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:33:04.614587  282279 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:33:04.638172  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:33:04.638197  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:33:04.638209  282279 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:33:04.638221  282279 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-211952 NodeName:default-k8s-diff-port-211952 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/c
erts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:33:04.638396  282279 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-diff-port-211952"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:33:04.638498  282279 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-diff-port-211952 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0108 21:33:04.638546  282279 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:33:04.645671  282279 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:33:04.645725  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:33:04.652367  282279 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (521 bytes)
	I0108 21:33:04.664767  282279 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:33:04.676853  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes)
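The three artifacts staged above (the 10-kubeadm.conf drop-in, the kubelet unit, and kubeadm.yaml.new) live inside the node, so the easiest way to inspect them afterwards is over minikube ssh; a sketch using the profile name from this run:

	# Read back the staged kubelet drop-in and kubeadm config from inside the node.
	minikube -p default-k8s-diff-port-211952 ssh "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
	minikube -p default-k8s-diff-port-211952 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
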
	I0108 21:33:04.689096  282279 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:33:04.691974  282279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:33:04.700883  282279 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952 for IP: 192.168.67.2
	I0108 21:33:04.700988  282279 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:33:04.701028  282279 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:33:04.701091  282279 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/client.key
	I0108 21:33:04.701143  282279 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key.c7fa3a9e
	I0108 21:33:04.701174  282279 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.key
	I0108 21:33:04.701257  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:33:04.701282  282279 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:33:04.701292  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:33:04.701314  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:33:04.701334  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:33:04.701353  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:33:04.701392  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:33:04.701980  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:33:04.719063  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:33:04.735492  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:33:04.752219  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:33:04.769562  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:33:04.785821  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:33:04.802771  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:33:04.820712  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:33:04.838855  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:33:04.855960  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:33:04.872964  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:33:04.890046  282279 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:33:04.902625  282279 ssh_runner.go:195] Run: openssl version
	I0108 21:33:04.907630  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:33:04.914856  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.917989  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.918039  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.922582  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:33:04.929304  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:33:04.936712  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.939656  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.939705  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.944460  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:33:04.951168  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:33:04.958399  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.961446  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.961485  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.966099  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
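The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are the openssl subject-hash values of the corresponding certificates, which is how OpenSSL looks up CAs in /etc/ssl/certs. The same link can be reproduced by hand, e.g. for the minikube CA:

	# Recreate the subject-hash symlink for the minikube CA (same scheme as the commands above).
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
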
	I0108 21:33:04.973053  282279 kubeadm.go:396] StartCluster: {Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:33:04.973140  282279 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:33:04.973193  282279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:33:04.997395  282279 cri.go:87] found id: "852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	I0108 21:33:04.997418  282279 cri.go:87] found id: "7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc"
	I0108 21:33:04.997424  282279 cri.go:87] found id: "26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225"
	I0108 21:33:04.997430  282279 cri.go:87] found id: "581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d"
	I0108 21:33:04.997436  282279 cri.go:87] found id: "e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa"
	I0108 21:33:04.997442  282279 cri.go:87] found id: "b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d"
	I0108 21:33:04.997448  282279 cri.go:87] found id: ""
	I0108 21:33:04.997486  282279 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:33:05.008860  282279 cri.go:114] JSON = null
	W0108 21:33:05.008911  282279 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
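The warning above appears to come from comparing two views of the same containers: crictl found six kube-system containers, while runc's k8s.io state root returned an empty list of paused containers, so there was nothing for the unpause step to act on. Both sides of that comparison can be rerun on the node with the commands already shown in the log:

	# Containers crictl knows about in kube-system vs. what runc reports in its state root.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l
	sudo runc --root /run/containerd/runc/k8s.io list -f json
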
	I0108 21:33:05.008979  282279 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:33:05.015919  282279 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:33:05.015939  282279 kubeadm.go:627] restartCluster start
	I0108 21:33:05.015976  282279 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:33:05.022384  282279 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.023096  282279 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-211952" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:33:05.023497  282279 kubeconfig.go:146] "default-k8s-diff-port-211952" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:33:05.024165  282279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:33:05.025421  282279 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:33:05.032110  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.032154  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.039769  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.240114  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.240215  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.248661  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.439925  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.440040  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.448824  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.640029  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.640100  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.648577  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.839823  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.839949  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.848450  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.040650  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.040716  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.049118  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.240431  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.240537  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.249216  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.440559  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.440631  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.449237  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.640348  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.640440  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.648807  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.840116  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.840207  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.848729  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.039918  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.039988  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.048542  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.240718  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.240800  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.249405  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.440610  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.440687  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.449502  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.640620  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.640687  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.649358  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.840624  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.840691  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.849725  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.039967  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:08.040051  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:08.048653  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.048676  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:08.048717  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:08.056766  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.056803  282279 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
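
The repeated checks above are minikube's liveness probe for the control plane: it keeps running sudo pgrep -xnf kube-apiserver.*minikube.* at short intervals, and when no PID ever appears it concludes the apiserver is down and falls back to a full reconfigure. A simplified sketch of that polling loop, with an illustrative timeout and interval rather than minikube's real values:

// apiserver_poll.go - illustrative sketch of the pgrep polling seen above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(3 * time.Second) // illustrative retry budget
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
			return
		}
		// pgrep exits 1 when nothing matches, which surfaces here as err != nil.
		time.Sleep(200 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver; cluster needs reconfigure")
}
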
	I0108 21:33:08.056811  282279 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:33:08.056824  282279 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:33:08.056880  282279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:33:08.081283  282279 cri.go:87] found id: "852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	I0108 21:33:08.081308  282279 cri.go:87] found id: "7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc"
	I0108 21:33:08.081315  282279 cri.go:87] found id: "26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225"
	I0108 21:33:08.081322  282279 cri.go:87] found id: "581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d"
	I0108 21:33:08.081330  282279 cri.go:87] found id: "e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa"
	I0108 21:33:08.081340  282279 cri.go:87] found id: "b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d"
	I0108 21:33:08.081349  282279 cri.go:87] found id: ""
	I0108 21:33:08.081357  282279 cri.go:232] Stopping containers: [852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f 7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc 26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225 581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d]
	I0108 21:33:08.081407  282279 ssh_runner.go:195] Run: which crictl
	I0108 21:33:08.084402  282279 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f 7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc 26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225 581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d
	I0108 21:33:08.110089  282279 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:33:08.120362  282279 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:33:08.127839  282279 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan  8 21:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  8 21:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Jan  8 21:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  8 21:20 /etc/kubernetes/scheduler.conf
	
	I0108 21:33:08.127889  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0108 21:33:08.134530  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0108 21:33:08.141215  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0108 21:33:08.147849  282279 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.147901  282279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 21:33:08.154323  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0108 21:33:08.161096  282279 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.161153  282279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 21:33:08.167783  282279 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:33:08.174752  282279 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
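
Before rerunning kubeadm, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint (here https://control-plane.minikube.internal:8444) and deletes any file that no longer references it, then swaps in the freshly generated kubeadm.yaml. A rough sketch of that grep-and-remove step, assuming direct root access to the files (the real code issues sudo grep and sudo rm -f over SSH):

// stale_conf_check.go - illustrative sketch of the config check above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			fmt.Println("skip:", err)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Println(f, "does not reference", endpoint, "- removing")
			_ = os.Remove(f) // the log shows sudo rm -f for the stale files
		}
	}
}
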
	I0108 21:33:08.174774  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.220042  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:05.706546  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:07.706879  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:08.484783  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:10.985364  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:08.629802  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.761310  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.827730  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.933064  282279 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:33:08.933117  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:09.442969  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:09.942976  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:10.014802  282279 api_server.go:71] duration metric: took 1.081741817s to wait for apiserver process to appear ...
	I0108 21:33:10.014831  282279 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:33:10.014843  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:10.205696  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:12.206601  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:14.706422  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:13.540654  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:33:13.540692  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:33:14.041349  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:14.045672  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:33:14.045695  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:33:14.540838  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:14.545990  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:33:14.546035  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:33:15.041627  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:15.046572  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 200:
	ok
	I0108 21:33:15.052817  282279 api_server.go:140] control plane version: v1.25.3
	I0108 21:33:15.052839  282279 api_server.go:130] duration metric: took 5.038002036s to wait for apiserver health ...
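
After the control-plane phases rerun, minikube polls the apiserver's /healthz endpoint on the non-default port 8444 until it answers 200, logging the per-component breakdown on intermediate 403/500 responses as seen above. A rough sketch of such a poll against the endpoint shown in the log, skipping TLS verification purely for illustration (minikube itself authenticates with the cluster's certificates):

// healthz_poll.go - illustrative sketch of the /healthz polling seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// InsecureSkipVerify only for this sketch; do not do this in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.67.2:8444/healthz"
	for i := 0; i < 20; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // healthy: body is just "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("healthz never returned 200")
}
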
	I0108 21:33:15.052848  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:33:15.052854  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:33:15.055132  282279 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:33:13.484537  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:15.484590  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:15.056590  282279 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:33:15.060305  282279 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:33:15.060320  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:33:15.073482  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:33:15.711930  282279 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:33:15.718666  282279 system_pods.go:59] 9 kube-system pods found
	I0108 21:33:15.718695  282279 system_pods.go:61] "coredns-565d847f94-fd94f" [08c29923-1e9a-4576-884b-e79485bdb24e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718706  282279 system_pods.go:61] "etcd-default-k8s-diff-port-211952" [4d6fe94c-75ef-40cf-b1c1-2377203f2503] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:33:15.718714  282279 system_pods.go:61] "kindnet-52cqk" [4ae6659c-e68a-492e-9e3f-5ffb047114c5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:33:15.718719  282279 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-211952" [e7f5a5bc-2f08-46ed-b8e1-1551fa29d27c] Running
	I0108 21:33:15.718728  282279 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-211952" [28c6bf68-0f27-494d-9102-fc669542c4a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:33:15.718735  282279 system_pods.go:61] "kube-proxy-hz8lw" [fa7c0714-1e45-4256-9383-976e79d1e49e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:33:15.718742  282279 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-211952" [645cd11b-9e55-47fe-aa43-f3b702c95c45] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:33:15.718751  282279 system_pods.go:61] "metrics-server-5c8fd5cf8-l2hp5" [bcd90320-490a-4343-abcb-f40aa375512e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718757  282279 system_pods.go:61] "storage-provisioner" [ad01ceaf-4269-4a54-b47e-b56d85e14354] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718765  282279 system_pods.go:74] duration metric: took 6.815857ms to wait for pod list to return data ...
	I0108 21:33:15.718772  282279 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:33:15.721658  282279 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:33:15.721678  282279 node_conditions.go:123] node cpu capacity is 8
	I0108 21:33:15.721690  282279 node_conditions.go:105] duration metric: took 2.910879ms to run NodePressure ...
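
With the apiserver healthy, minikube lists the kube-system pods and verifies the node's capacity and pressure conditions before rerunning the addon phase. A minimal client-go sketch of that pod listing, assuming the k8s.io/client-go module and a kubeconfig path passed on the command line (both are assumptions of this sketch, not details from the log):

// list_system_pods.go - illustrative client-go sketch of the pod listing above.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: list_system_pods <kubeconfig>")
		return
	}
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Args[1]) // kubeconfig path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
	}
}
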
	I0108 21:33:15.721709  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:15.850359  282279 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 21:33:15.854037  282279 kubeadm.go:778] kubelet initialised
	I0108 21:33:15.854056  282279 kubeadm.go:779] duration metric: took 3.67496ms waiting for restarted kubelet to initialise ...
	I0108 21:33:15.854063  282279 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:33:15.859567  282279 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:17.864672  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:17.205815  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.206912  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:17.485768  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.985283  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.865551  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:22.365227  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:21.706078  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:23.706755  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:22.485377  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:24.984649  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:24.865051  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:27.364362  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:25.706795  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:28.206074  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:27.484652  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:29.484907  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:31.985181  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:29.365262  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:31.864536  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:30.206547  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:32.705805  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:34.484659  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:36.985157  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:33.865545  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:36.364706  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:35.205900  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:37.206575  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:39.706410  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:39.484405  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:41.485144  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:38.366314  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:40.865544  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:42.205820  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:44.206429  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:43.985033  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:45.985104  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:43.364368  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:45.365457  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:47.865583  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:46.706576  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:49.206474  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:47.985130  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:50.484792  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:50.365374  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:52.865225  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:51.206583  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:53.706500  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:52.984520  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:54.984810  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:55.364623  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:57.365130  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:56.205754  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:58.206523  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:57.484534  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:59.984319  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:01.985026  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:59.865408  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:02.364929  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:00.706734  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:03.206405  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:04.485051  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:06.984884  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:04.864561  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:07.366326  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:05.706010  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:07.706288  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:08.985455  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:11.485043  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:09.865391  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:12.364526  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:10.206460  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:12.705615  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:14.706005  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:13.984826  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:16.484152  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:14.364606  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:16.365289  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:17.206712  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:19.705849  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:18.485130  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:20.485537  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:18.864582  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:20.865195  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:22.865407  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:21.706525  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:24.206204  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:22.984564  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:24.984654  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:25.364979  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:27.365790  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:26.206664  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:28.705923  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:27.485200  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:29.984779  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:31.984961  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:29.865042  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:31.865310  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:30.705966  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:32.706184  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:34.706518  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:33.985148  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.484872  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:33.865432  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.365146  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.706768  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:39.205866  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:38.485130  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:40.984717  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:38.865173  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:41.364499  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:41.705813  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.706112  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.484553  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:45.984290  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.365079  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:45.365570  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:47.865054  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:46.206566  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:48.706606  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:47.984724  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:50.484463  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:50.365544  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:52.864342  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:51.206067  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:53.206386  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:52.484509  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:54.484628  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:56.984663  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:54.865174  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:56.865226  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:55.705777  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:58.206536  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:58.985043  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:00.985441  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:59.365717  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:01.865247  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:00.705686  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:02.706281  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:03.484874  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:05.485178  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:03.865438  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:06.365588  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:05.206221  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:07.206742  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:09.706286  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:07.485379  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:09.485491  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:11.985421  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:08.865293  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:11.364853  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:12.205938  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:14.206587  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:14.484834  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:16.984217  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:13.864458  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:15.865297  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:16.706511  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:19.206844  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:18.985241  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:21.485361  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:18.364605  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:20.365307  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:22.865280  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:21.706576  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:24.206264  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:23.984764  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:25.984921  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:25.365211  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:27.865212  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:26.706631  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:29.205837  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:28.485111  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:30.984944  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:29.865294  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:32.365083  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:31.206819  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:33.706459  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:33.485037  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:35.984758  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:34.864627  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:36.865632  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:36.206617  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:38.705904  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:37.984809  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:39.984942  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:41.985321  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:39.365282  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:41.365393  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:40.706491  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:43.206589  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:44.484609  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:46.985153  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:43.865525  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:46.364697  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:45.705645  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:47.705922  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:49.706709  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:49.484711  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:51.485242  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:48.365304  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:50.865062  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:52.206076  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:54.206636  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:53.984904  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:55.985190  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:53.364585  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:55.866756  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:56.706242  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.706485  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.484404  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.485044  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.365278  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.864694  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:02.865305  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.706662  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:03.206301  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:02.485191  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:04.984589  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:05.365592  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.865076  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:05.705915  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.706822  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.484499  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:09.985336  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:10.364594  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.365393  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:10.206345  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.206780  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:14.705921  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.485725  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:14.982268  278286 pod_ready.go:81] duration metric: took 4m0.003125371s waiting for pod "coredns-565d847f94-jw8vf" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:14.982291  278286 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-jw8vf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:36:14.982340  278286 pod_ready.go:38] duration metric: took 4m0.007969001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:14.982370  278286 kubeadm.go:631] restartCluster took 4m10.8124082s
	W0108 21:36:14.982580  278286 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
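	(Editor's note, not part of the recorded run.) The repeated pod_ready messages above all report the same scheduler condition: the single node still carries the node.kubernetes.io/not-ready taint, so the Pending CoreDNS pod cannot be scheduled and the 4m0s wait times out. A hedged sketch of commands one could run against this profile's kubeconfig to inspect that state by hand:
		kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'   # shows any remaining not-ready taint
		kubectl -n kube-system describe pod coredns-565d847f94-jw8vf                                       # Pending reason and Unschedulable events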
	I0108 21:36:14.982625  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:36:17.712121  278286 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.729470949s)
	I0108 21:36:17.712185  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:17.722197  278286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:17.729255  278286 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:36:17.729298  278286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:36:17.736461  278286 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:36:17.736503  278286 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:36:17.776074  278286 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:36:17.776141  278286 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:36:17.803264  278286 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:36:17.803362  278286 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:36:17.803405  278286 kubeadm.go:317] OS: Linux
	I0108 21:36:17.803445  278286 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:36:17.803517  278286 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:36:17.803559  278286 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:36:17.803599  278286 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:36:17.803644  278286 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:36:17.803713  278286 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:36:17.803782  278286 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:36:17.803823  278286 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:36:17.803861  278286 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:36:17.868509  278286 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:36:17.868640  278286 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:36:17.868786  278286 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:36:17.980682  278286 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:36:14.864781  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:16.865103  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:17.985661  278286 out.go:204]   - Generating certificates and keys ...
	I0108 21:36:17.985801  278286 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:36:17.985902  278286 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:36:17.986004  278286 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:36:17.986091  278286 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:36:17.986183  278286 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:36:17.986259  278286 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:36:17.986341  278286 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:36:17.986417  278286 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:36:17.986542  278286 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:36:17.986649  278286 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:36:17.986701  278286 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:36:17.986780  278286 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:36:18.059736  278286 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:36:18.157820  278286 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:36:18.409007  278286 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:36:18.508551  278286 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:36:18.520890  278286 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:36:18.521889  278286 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:36:18.521949  278286 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:36:18.609158  278286 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:36:16.706837  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:19.206362  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:18.611390  278286 out.go:204]   - Booting up control plane ...
	I0108 21:36:18.611574  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:36:18.612908  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:36:18.613799  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:36:18.614568  278286 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:36:18.616788  278286 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:36:18.865230  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:20.865904  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:21.705735  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:23.706244  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:24.619697  278286 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002882 seconds
	I0108 21:36:24.619903  278286 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:36:24.627998  278286 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:36:25.143041  278286 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:36:25.143241  278286 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-211859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:36:25.650094  278286 kubeadm.go:317] [bootstrap-token] Using token: 0hs0sx.2quwwfjv2ljr7rle
	I0108 21:36:25.651809  278286 out.go:204]   - Configuring RBAC rules ...
	I0108 21:36:25.651961  278286 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:36:25.654307  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:36:25.658950  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:36:25.660952  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:36:25.662921  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:36:25.664784  278286 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:36:25.671893  278286 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:36:25.864621  278286 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:36:26.057684  278286 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:36:26.058669  278286 kubeadm.go:317] 
	I0108 21:36:26.058754  278286 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:36:26.058765  278286 kubeadm.go:317] 
	I0108 21:36:26.058853  278286 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:36:26.058869  278286 kubeadm.go:317] 
	I0108 21:36:26.058904  278286 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:36:26.058983  278286 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:36:26.059054  278286 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:36:26.059063  278286 kubeadm.go:317] 
	I0108 21:36:26.059140  278286 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 21:36:26.059150  278286 kubeadm.go:317] 
	I0108 21:36:26.059219  278286 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:36:26.059229  278286 kubeadm.go:317] 
	I0108 21:36:26.059298  278286 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:36:26.059393  278286 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:36:26.059498  278286 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:36:26.059510  278286 kubeadm.go:317] 
	I0108 21:36:26.059614  278286 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:36:26.059726  278286 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:36:26.059744  278286 kubeadm.go:317] 
	I0108 21:36:26.059848  278286 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 0hs0sx.2quwwfjv2ljr7rle \
	I0108 21:36:26.059981  278286 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:36:26.060005  278286 kubeadm.go:317] 	--control-plane 
	I0108 21:36:26.060009  278286 kubeadm.go:317] 
	I0108 21:36:26.060140  278286 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:36:26.060156  278286 kubeadm.go:317] 
	I0108 21:36:26.060242  278286 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 0hs0sx.2quwwfjv2ljr7rle \
	I0108 21:36:26.060344  278286 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:36:26.061999  278286 kubeadm.go:317] W0108 21:36:17.771186    3316 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:36:26.062209  278286 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:36:26.062331  278286 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:36:26.062355  278286 cni.go:95] Creating CNI manager for ""
	I0108 21:36:26.062365  278286 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:36:26.064570  278286 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:36:26.066293  278286 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:36:26.112674  278286 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:36:26.112695  278286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:36:26.128247  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
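	(Editor's note, not part of the recorded run.) The kubectl apply above installs the kindnet CNI manifest that minikube recommended for the docker driver + containerd combination. A hedged verification sketch, with object names assumed rather than taken from this log:
		kubectl -n kube-system get daemonsets    # kindnet is expected to show up here as a DaemonSet (assumed name)
		ls /opt/cni/bin                          # the earlier stat of /opt/cni/bin/portmap checks one of these binaries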
	I0108 21:36:26.801006  278286 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:36:26.801092  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:26.801100  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=no-preload-211859 minikube.k8s.io/updated_at=2023_01_08T21_36_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:26.808849  278286 ops.go:34] apiserver oom_adj: -16
	I0108 21:36:26.928188  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:23.365451  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:25.365511  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:27.864750  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:25.706512  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:28.206205  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:27.522837  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:28.022542  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:28.522922  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.022368  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.522328  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:30.022929  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:30.523064  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:31.022221  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:31.522993  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:32.022733  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.865401  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:31.865613  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:30.207607  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:32.705941  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:34.706614  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:32.522593  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:33.022409  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:33.522830  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.022514  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.522961  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:35.023204  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:35.523260  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:36.022528  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:36.522928  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:37.022841  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.364509  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:36.364566  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:37.523049  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.022536  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.522834  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.586979  278286 kubeadm.go:1067] duration metric: took 11.78594385s to wait for elevateKubeSystemPrivileges.
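	(Editor's note, not part of the recorded run.) The block of identical "kubectl get sa default" runs between 21:36:26 and 21:36:38 is a poll: minikube retries roughly every 500ms, judging by the timestamps, until the default ServiceAccount exists, and that is what the 11.78s elevateKubeSystemPrivileges metric measures. A minimal bash sketch of that loop, reusing the exact command from the log:
		until sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default \
		      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
		  sleep 0.5   # interval inferred from the log timestamps above
		done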
	I0108 21:36:38.587009  278286 kubeadm.go:398] StartCluster complete in 4m34.458658123s
	I0108 21:36:38.587037  278286 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:38.587148  278286 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:36:38.588149  278286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:39.105452  278286 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-211859" rescaled to 1
	I0108 21:36:39.105521  278286 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:36:39.107702  278286 out.go:177] * Verifying Kubernetes components...
	I0108 21:36:39.105557  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:36:39.105612  278286 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:36:39.105739  278286 config.go:180] Loaded profile config "no-preload-211859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:36:39.109968  278286 addons.go:65] Setting storage-provisioner=true in profile "no-preload-211859"
	I0108 21:36:39.109979  278286 addons.go:65] Setting default-storageclass=true in profile "no-preload-211859"
	I0108 21:36:39.109999  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:39.110001  278286 addons.go:227] Setting addon storage-provisioner=true in "no-preload-211859"
	I0108 21:36:39.110004  278286 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-211859"
	W0108 21:36:39.110010  278286 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:36:39.110055  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.109970  278286 addons.go:65] Setting dashboard=true in profile "no-preload-211859"
	I0108 21:36:39.110159  278286 addons.go:227] Setting addon dashboard=true in "no-preload-211859"
	W0108 21:36:39.110169  278286 addons.go:236] addon dashboard should already be in state true
	I0108 21:36:39.110200  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.109981  278286 addons.go:65] Setting metrics-server=true in profile "no-preload-211859"
	I0108 21:36:39.110261  278286 addons.go:227] Setting addon metrics-server=true in "no-preload-211859"
	W0108 21:36:39.110276  278286 addons.go:236] addon metrics-server should already be in state true
	I0108 21:36:39.110330  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.110352  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110511  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110572  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110706  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.151624  278286 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:36:39.153337  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:36:39.153355  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:36:39.153407  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.155756  278286 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:36:39.157349  278286 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:36:39.157371  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:36:39.157418  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.160291  278286 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:36:39.157827  278286 addons.go:227] Setting addon default-storageclass=true in "no-preload-211859"
	W0108 21:36:39.162099  278286 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:36:39.162135  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.162607  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.164649  278286 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:36:37.206095  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:39.206996  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:39.166241  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:36:39.166260  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:36:39.166314  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.193544  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.199785  278286 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:36:39.199812  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:36:39.199862  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.205498  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.208611  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.231311  278286 node_ready.go:35] waiting up to 6m0s for node "no-preload-211859" to be "Ready" ...
	I0108 21:36:39.231694  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:36:39.240040  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.426253  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:36:39.426846  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:36:39.426865  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:36:39.436437  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:36:39.438425  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:36:39.438452  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:36:39.523837  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:36:39.523905  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:36:39.532411  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:36:39.532499  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:36:39.615631  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:36:39.615719  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:36:39.626445  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:36:39.626521  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:36:39.639382  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:36:39.639451  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:36:39.725135  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:36:39.731545  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:36:39.731573  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:36:39.827181  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:36:39.827289  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:36:39.917954  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:36:39.917981  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:36:40.011154  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:36:40.011186  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:36:40.017536  278286 start.go:826] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
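	(Editor's note, not part of the recorded run.) The replace command issued at 21:36:39.231694 pipes the coredns ConfigMap through sed so that a hosts block resolving host.minikube.internal to 192.168.85.1 is inserted ahead of the forward-to-/etc/resolv.conf rule; the "host record injected" line above confirms it applied. A hedged way to inspect the result on the cluster (surrounding Corefile content assumed):
		kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
		# expected fragment:
		#   hosts {
		#      192.168.85.1 host.minikube.internal
		#      fallthrough
		#   }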
	I0108 21:36:40.033803  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:36:40.033827  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:36:40.117534  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:36:40.522822  278286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.096529518s)
	I0108 21:36:40.522881  278286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.086407927s)
	I0108 21:36:40.714945  278286 addons.go:457] Verifying addon metrics-server=true in "no-preload-211859"
	I0108 21:36:41.016673  278286 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-211859 addons enable metrics-server	
	
	
	I0108 21:36:41.018352  278286 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0108 21:36:41.019949  278286 addons.go:488] enableAddons completed in 1.914342148s
	I0108 21:36:41.239026  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
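	(Editor's note, not part of the recorded run.) kubeadm init has finished at this point, but the node itself still reports "Ready":"False", which is why node_ready.go keeps polling for up to 6m0s. A hedged one-liner for surfacing the underlying node condition:
		kubectl get node no-preload-211859 -o jsonpath='{.status.conditions[?(@.type=="Ready")].reason}{" "}{.status.conditions[?(@.type=="Ready")].message}{"\n"}'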
	I0108 21:36:41.203867  274657 pod_ready.go:81] duration metric: took 4m0.002306196s waiting for pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:41.203901  274657 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:36:41.203940  274657 pod_ready.go:38] duration metric: took 4m0.006906053s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:41.203967  274657 kubeadm.go:631] restartCluster took 5m9.671476322s
	W0108 21:36:41.204176  274657 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:36:41.204211  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:36:42.410951  274657 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.206714622s)
	I0108 21:36:42.411034  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:42.420761  274657 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:42.427895  274657 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:36:42.427942  274657 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:36:42.434476  274657 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:36:42.434514  274657 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:36:42.479014  274657 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0108 21:36:42.479075  274657 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:36:42.506527  274657 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:36:42.506650  274657 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:36:42.506722  274657 kubeadm.go:317] OS: Linux
	I0108 21:36:42.506775  274657 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:36:42.506836  274657 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:36:42.506895  274657 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:36:42.506970  274657 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:36:42.507042  274657 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:36:42.507115  274657 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:36:42.575244  274657 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:36:42.575356  274657 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:36:42.575464  274657 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:36:42.705716  274657 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:36:42.707322  274657 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:36:42.714364  274657 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0108 21:36:42.788896  274657 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:36:38.365195  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:40.864900  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:42.865124  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:42.793301  274657 out.go:204]   - Generating certificates and keys ...
	I0108 21:36:42.793445  274657 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:36:42.793584  274657 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:36:42.793709  274657 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:36:42.793804  274657 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:36:42.793866  274657 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:36:42.793909  274657 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:36:42.793956  274657 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:36:42.794003  274657 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:36:42.794059  274657 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:36:42.794113  274657 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:36:42.794145  274657 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:36:42.794211  274657 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:36:42.938030  274657 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:36:43.019391  274657 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:36:43.165446  274657 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:36:43.296073  274657 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:36:43.296890  274657 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:36:43.298841  274657 out.go:204]   - Booting up control plane ...
	I0108 21:36:43.298961  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:36:43.303628  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:36:43.304561  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:36:43.305309  274657 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:36:43.307378  274657 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:36:43.239329  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:45.239687  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:45.365383  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:47.865553  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:47.739338  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:49.739648  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:52.238824  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:51.810038  274657 kubeadm.go:317] [apiclient] All control plane components are healthy after 8.502593 seconds
	I0108 21:36:51.810181  274657 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:36:51.821149  274657 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:36:52.336468  274657 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:36:52.336653  274657 kubeadm.go:317] [mark-control-plane] Marking the node old-k8s-version-211828 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0108 21:36:52.842409  274657 kubeadm.go:317] [bootstrap-token] Using token: ayw1nu.phe95ebgibs3udtw
	I0108 21:36:52.844083  274657 out.go:204]   - Configuring RBAC rules ...
	I0108 21:36:52.844190  274657 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:36:52.847569  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:36:52.850422  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:36:52.852561  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:36:52.854272  274657 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:36:52.894172  274657 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:36:53.257840  274657 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:36:53.258782  274657 kubeadm.go:317] 
	I0108 21:36:53.258856  274657 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:36:53.258871  274657 kubeadm.go:317] 
	I0108 21:36:53.258948  274657 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:36:53.258958  274657 kubeadm.go:317] 
	I0108 21:36:53.258988  274657 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:36:53.259068  274657 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:36:53.259119  274657 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:36:53.259126  274657 kubeadm.go:317] 
	I0108 21:36:53.259165  274657 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:36:53.259250  274657 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:36:53.259306  274657 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:36:53.259310  274657 kubeadm.go:317] 
	I0108 21:36:53.259383  274657 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities 
	I0108 21:36:53.259441  274657 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:36:53.259446  274657 kubeadm.go:317] 
	I0108 21:36:53.259539  274657 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token ayw1nu.phe95ebgibs3udtw \
	I0108 21:36:53.259662  274657 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:36:53.259688  274657 kubeadm.go:317]     --control-plane 	  
	I0108 21:36:53.259694  274657 kubeadm.go:317] 
	I0108 21:36:53.259813  274657 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:36:53.259829  274657 kubeadm.go:317] 
	I0108 21:36:53.259906  274657 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token ayw1nu.phe95ebgibs3udtw \
	I0108 21:36:53.260017  274657 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:36:53.262215  274657 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:36:53.262352  274657 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:36:53.262389  274657 cni.go:95] Creating CNI manager for ""
	I0108 21:36:53.262399  274657 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:36:53.264329  274657 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:36:50.364823  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:52.865232  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:53.265737  274657 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:36:53.269178  274657 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0108 21:36:53.269195  274657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:36:53.282457  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:36:53.488747  274657 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:36:53.488820  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:53.488836  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=old-k8s-version-211828 minikube.k8s.io/updated_at=2023_01_08T21_36_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:53.570539  274657 ops.go:34] apiserver oom_adj: -16
	I0108 21:36:53.570672  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:54.167787  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:54.667921  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:54.239313  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:56.739563  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:55.364998  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:57.365375  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:55.167437  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:55.667880  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:56.167390  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:56.667596  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:57.167755  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:57.667185  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:58.167862  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:58.667300  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:59.167329  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:59.667869  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:59.239207  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:01.738681  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:59.865037  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:02.364695  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:00.167819  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:00.668207  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:01.167287  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:01.668111  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:02.167785  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:02.667989  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:03.167539  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:03.667603  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:04.167676  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:04.667808  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:03.739097  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:05.739401  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:04.864908  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:07.365162  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:05.168182  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:05.667597  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:06.167537  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:06.667619  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:07.168108  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:07.668145  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:08.167448  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:08.262221  274657 kubeadm.go:1067] duration metric: took 14.773463011s to wait for elevateKubeSystemPrivileges.
	I0108 21:37:08.262258  274657 kubeadm.go:398] StartCluster complete in 5m36.772809994s
	I0108 21:37:08.262281  274657 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:08.262401  274657 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:37:08.263456  274657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:08.779968  274657 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-211828" rescaled to 1
	I0108 21:37:08.780035  274657 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:37:08.781734  274657 out.go:177] * Verifying Kubernetes components...
	I0108 21:37:08.780090  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:37:08.780101  274657 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:37:08.780321  274657 config.go:180] Loaded profile config "old-k8s-version-211828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0108 21:37:08.783353  274657 addons.go:65] Setting dashboard=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783365  274657 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783367  274657 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783380  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:08.783385  274657 addons.go:227] Setting addon metrics-server=true in "old-k8s-version-211828"
	I0108 21:37:08.783387  274657 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-211828"
	W0108 21:37:08.783394  274657 addons.go:236] addon metrics-server should already be in state true
	I0108 21:37:08.783441  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783384  274657 addons.go:227] Setting addon dashboard=true in "old-k8s-version-211828"
	W0108 21:37:08.783526  274657 addons.go:236] addon dashboard should already be in state true
	I0108 21:37:08.783568  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783356  274657 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783648  274657 addons.go:227] Setting addon storage-provisioner=true in "old-k8s-version-211828"
	W0108 21:37:08.783668  274657 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:37:08.783727  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783776  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.783927  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.784028  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.784133  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.794999  274657 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-211828" to be "Ready" ...
	I0108 21:37:08.824991  274657 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:37:08.822967  274657 addons.go:227] Setting addon default-storageclass=true in "old-k8s-version-211828"
	W0108 21:37:08.825030  274657 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:37:08.825068  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.826962  274657 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:37:08.825542  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.828596  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:37:08.828602  274657 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:37:08.828610  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:37:08.828632  274657 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:08.830193  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:37:08.831697  274657 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:37:08.830251  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.828662  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.833415  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:37:08.833435  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:37:08.833477  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.865130  274657 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:08.865153  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:37:08.865262  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.870167  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.876829  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.891352  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.895346  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:37:08.901551  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.966952  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:37:08.966980  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:37:09.020839  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:37:09.020864  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:37:09.026679  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:37:09.026702  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:37:09.035881  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:09.036053  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:09.037460  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:37:09.037484  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:37:09.113665  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:09.113699  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:37:09.126531  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:37:09.126566  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:37:09.132355  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:09.142671  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:37:09.142695  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:37:09.225954  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:37:09.225983  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:37:09.311794  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:37:09.311868  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:37:09.321460  274657 start.go:826] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0108 21:37:09.329750  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:37:09.329779  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:37:09.415014  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:37:09.415041  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:37:09.434577  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:09.434608  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:37:09.450703  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:09.848961  274657 addons.go:457] Verifying addon metrics-server=true in "old-k8s-version-211828"
	I0108 21:37:10.258944  274657 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-211828 addons enable metrics-server	
	
	
	I0108 21:37:10.260902  274657 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0108 21:37:07.739683  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:09.740319  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:12.239302  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:09.365405  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:11.865521  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:10.262484  274657 addons.go:488] enableAddons completed in 1.482385235s
	I0108 21:37:10.800978  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:13.301617  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:14.239339  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:16.239538  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:14.364973  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:15.862343  282279 pod_ready.go:81] duration metric: took 4m0.002735215s waiting for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" ...
	E0108 21:37:15.862365  282279 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:37:15.862410  282279 pod_ready.go:38] duration metric: took 4m0.008337756s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:37:15.862442  282279 kubeadm.go:631] restartCluster took 4m10.846498869s
	W0108 21:37:15.862572  282279 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:37:15.862600  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:37:18.604264  282279 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.741643542s)
	I0108 21:37:18.604323  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:18.613785  282279 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:37:18.620707  282279 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:37:18.620756  282279 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:37:18.627110  282279 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:37:18.627161  282279 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:37:18.665230  282279 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:37:18.665379  282279 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:37:18.693390  282279 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:37:18.693485  282279 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:37:18.693536  282279 kubeadm.go:317] OS: Linux
	I0108 21:37:18.693625  282279 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:37:18.693699  282279 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:37:18.693758  282279 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:37:18.693816  282279 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:37:18.693855  282279 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:37:18.693897  282279 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:37:18.693932  282279 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:37:18.693986  282279 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:37:18.694033  282279 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:37:18.757764  282279 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:37:18.757887  282279 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:37:18.757990  282279 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:37:18.880203  282279 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:37:18.885649  282279 out.go:204]   - Generating certificates and keys ...
	I0108 21:37:18.885786  282279 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:37:18.885859  282279 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:37:18.885942  282279 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:37:18.886014  282279 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:37:18.886108  282279 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:37:18.886194  282279 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:37:18.886282  282279 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:37:18.886366  282279 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:37:18.886464  282279 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:37:18.886537  282279 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:37:18.886603  282279 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:37:18.886705  282279 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:37:18.970116  282279 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:37:19.061650  282279 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:37:19.314844  282279 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:37:19.411377  282279 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:37:19.423013  282279 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:37:19.423842  282279 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:37:19.423907  282279 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:37:19.507274  282279 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:37:15.801234  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:18.301292  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:18.738947  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:20.739953  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:19.509473  282279 out.go:204]   - Booting up control plane ...
	I0108 21:37:19.509609  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:37:19.510392  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:37:19.511285  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:37:19.512005  282279 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:37:19.514544  282279 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:37:20.301380  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:22.801865  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:25.517443  282279 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002884 seconds
	I0108 21:37:25.517596  282279 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:37:25.525842  282279 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:37:26.040802  282279 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:37:26.041035  282279 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-diff-port-211952 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:37:26.548645  282279 kubeadm.go:317] [bootstrap-token] Using token: e8jg3u.r5d9gog7fpwiofqp
	I0108 21:37:26.550383  282279 out.go:204]   - Configuring RBAC rules ...
	I0108 21:37:26.550517  282279 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:37:26.553632  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:37:26.561595  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:37:26.563603  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:37:26.566273  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:37:26.569011  282279 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:37:26.577117  282279 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:37:26.777486  282279 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:37:26.956684  282279 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:37:26.957742  282279 kubeadm.go:317] 
	I0108 21:37:26.957841  282279 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:37:26.957852  282279 kubeadm.go:317] 
	I0108 21:37:26.957946  282279 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:37:26.957959  282279 kubeadm.go:317] 
	I0108 21:37:26.957992  282279 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:37:26.958072  282279 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:37:26.958151  282279 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:37:26.958161  282279 kubeadm.go:317] 
	I0108 21:37:26.958244  282279 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 21:37:26.958255  282279 kubeadm.go:317] 
	I0108 21:37:26.958324  282279 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:37:26.958334  282279 kubeadm.go:317] 
	I0108 21:37:26.958411  282279 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:37:26.958519  282279 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:37:26.958614  282279 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:37:26.958627  282279 kubeadm.go:317] 
	I0108 21:37:26.958736  282279 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:37:26.958873  282279 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:37:26.958895  282279 kubeadm.go:317] 
	I0108 21:37:26.958993  282279 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token e8jg3u.r5d9gog7fpwiofqp \
	I0108 21:37:26.959108  282279 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:37:26.959144  282279 kubeadm.go:317] 	--control-plane 
	I0108 21:37:26.959155  282279 kubeadm.go:317] 
	I0108 21:37:26.959279  282279 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:37:26.959295  282279 kubeadm.go:317] 
	I0108 21:37:26.959387  282279 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token e8jg3u.r5d9gog7fpwiofqp \
	I0108 21:37:26.959591  282279 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:37:27.010668  282279 kubeadm.go:317] W0108 21:37:18.659761    3310 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:37:27.010963  282279 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:37:27.011109  282279 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:37:27.011143  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:37:27.011161  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:37:27.013790  282279 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:37:23.239090  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:25.239428  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:27.016436  282279 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:37:27.020247  282279 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:37:27.020267  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:37:27.033939  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:37:27.773746  282279 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:37:27.773820  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:27.773829  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=default-k8s-diff-port-211952 minikube.k8s.io/updated_at=2023_01_08T21_37_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:27.858069  282279 ops.go:34] apiserver oom_adj: -16
	I0108 21:37:27.858162  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:25.301674  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:27.801420  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:27.738878  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:29.739083  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:31.739252  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:28.451616  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:28.951553  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:29.451725  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:29.950766  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.450878  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.951743  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:31.450739  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:31.951303  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:32.450882  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:32.951389  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.301599  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:32.800759  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:33.739342  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:36.238973  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:33.451553  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:33.951640  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:34.451179  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:34.951522  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.450753  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.950904  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:36.450992  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:36.951610  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:37.451311  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:37.951081  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.301523  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:37.800886  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:38.451124  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:38.951311  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:39.451052  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:39.951786  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:40.450906  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:40.622559  282279 kubeadm.go:1067] duration metric: took 12.848793735s to wait for elevateKubeSystemPrivileges.
	I0108 21:37:40.622595  282279 kubeadm.go:398] StartCluster complete in 4m35.649555324s
	I0108 21:37:40.622614  282279 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:40.622704  282279 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:37:40.623799  282279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:41.138673  282279 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-diff-port-211952" rescaled to 1
	I0108 21:37:41.138736  282279 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:37:41.138753  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:37:41.141673  282279 out.go:177] * Verifying Kubernetes components...
	I0108 21:37:41.138793  282279 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:37:41.138974  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:37:41.143598  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:41.143622  282279 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143643  282279 addons.go:227] Setting addon storage-provisioner=true in "default-k8s-diff-port-211952"
	W0108 21:37:41.143652  282279 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:37:41.143672  282279 addons.go:65] Setting default-storageclass=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143694  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.143696  282279 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-211952"
	I0108 21:37:41.143742  282279 addons.go:65] Setting metrics-server=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143751  282279 addons.go:65] Setting dashboard=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143771  282279 addons.go:227] Setting addon metrics-server=true in "default-k8s-diff-port-211952"
	I0108 21:37:41.143780  282279 addons.go:227] Setting addon dashboard=true in "default-k8s-diff-port-211952"
	W0108 21:37:41.143797  282279 addons.go:236] addon dashboard should already be in state true
	I0108 21:37:41.143841  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	W0108 21:37:41.143781  282279 addons.go:236] addon metrics-server should already be in state true
	I0108 21:37:41.143915  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.144018  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144222  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144229  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144299  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.184041  282279 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:37:41.186236  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:37:41.186259  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:37:41.183770  282279 addons.go:227] Setting addon default-storageclass=true in "default-k8s-diff-port-211952"
	I0108 21:37:41.186311  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	W0108 21:37:41.186320  282279 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:37:41.186356  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.187948  282279 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:37:41.186812  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.191003  282279 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:37:41.189639  282279 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:41.192705  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:37:41.192773  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.195052  282279 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:37:38.239104  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:40.239437  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:41.196683  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:37:41.196706  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:37:41.196763  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.221516  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.226288  282279 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:41.226312  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:37:41.226392  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.226595  282279 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-211952" to be "Ready" ...
	I0108 21:37:41.226958  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:37:41.233899  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.236188  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.261350  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.328029  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:37:41.328055  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:37:41.410390  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:37:41.410477  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:37:41.429903  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:41.429978  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:37:41.431528  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:41.434596  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:41.435835  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:37:41.435891  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:37:41.518039  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:41.525611  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:37:41.525635  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:37:41.617739  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:37:41.617770  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:37:41.710400  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:37:41.710430  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:37:41.733619  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:37:41.733650  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:37:41.913693  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:37:41.913722  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:37:41.923702  282279 start.go:826] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0108 21:37:41.939574  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:37:41.939602  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:37:42.033056  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:37:42.033090  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:37:42.126252  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:42.126280  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:37:42.219356  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:42.612393  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.177754873s)
	I0108 21:37:42.649146  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.131058374s)
	I0108 21:37:42.649245  282279 addons.go:457] Verifying addon metrics-server=true in "default-k8s-diff-port-211952"
	I0108 21:37:43.233589  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:43.519132  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.299673532s)
	I0108 21:37:43.521195  282279 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-211952 addons enable metrics-server	
	
	
	I0108 21:37:43.523337  282279 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0108 21:37:39.801595  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:41.801850  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:44.301445  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:42.739717  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:45.239105  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:43.525339  282279 addons.go:488] enableAddons completed in 2.386543882s
	I0108 21:37:45.732797  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:47.733580  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:46.800798  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:48.800989  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:47.738847  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:49.739115  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:52.238899  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:50.232935  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:52.233798  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:50.801073  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:52.801144  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:54.239128  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:56.739014  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:54.733016  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:56.733874  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:55.301797  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:57.801274  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:59.239171  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:01.239292  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:59.233003  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:01.233346  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:03.233665  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:59.801607  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:02.300746  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:04.301290  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:03.738362  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:05.738653  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:05.233897  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:07.234180  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:06.801829  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:09.301092  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:07.739372  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:10.239775  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:09.733403  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:11.733914  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:11.301300  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:13.800777  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:12.739231  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:15.238970  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:14.233667  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:16.732749  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:15.801406  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:17.801519  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:17.738673  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:19.738980  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:22.238583  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:18.733049  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:20.734111  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:23.233585  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:19.801620  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:22.301152  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:24.239366  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:26.738352  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:25.233967  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:27.732889  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:24.801117  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:27.300926  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:29.301266  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:28.739245  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:31.238599  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:29.733825  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:32.234140  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:31.301555  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:33.800917  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:33.239230  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:35.738754  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:34.733077  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:36.733560  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:35.801221  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:37.801365  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:38.239549  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:40.738973  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:38.733737  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:41.232994  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:43.233767  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:40.300687  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:42.301352  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:44.301680  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:42.739381  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:45.238776  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:47.238948  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:45.233859  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:47.733544  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:46.801357  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:48.801472  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:49.739156  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:52.239344  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:49.733766  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:52.233361  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:51.300633  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:53.301297  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:54.239534  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:56.738615  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:54.233916  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:56.733328  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:55.801671  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:58.301397  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:58.738759  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:00.739100  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:58.734209  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:01.232932  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:03.233020  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:00.801536  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:03.300754  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:03.239262  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:05.739203  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:05.233361  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:07.233770  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:05.301375  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:07.800934  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:08.239116  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:10.239161  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:09.733072  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:11.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:09.801368  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:12.301198  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:12.738523  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:14.739235  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:17.239112  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:14.233759  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:16.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:14.801261  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:17.300721  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:19.301075  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:19.738653  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:21.738764  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:18.733878  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:21.233705  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:21.301289  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:23.301516  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:23.738915  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:26.239205  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:23.733860  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:26.233091  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:28.233460  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:25.801475  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:28.301549  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:28.239272  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:30.738619  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:30.733105  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:32.734009  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:30.800660  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:33.301504  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:32.739223  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:35.238771  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:37.238972  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:35.233611  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:37.733328  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:35.801029  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:37.801500  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:39.239140  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:41.739302  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:39.733731  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:42.233801  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:40.301529  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:42.800621  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:44.238840  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:46.239243  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:44.733038  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:46.733391  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:44.801100  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:47.300450  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:49.301320  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:48.739022  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:51.238630  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:49.233954  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:51.733795  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:51.801285  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:53.801488  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:53.739288  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:56.239051  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:54.234004  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:56.733167  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:56.301044  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:58.800845  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:58.738520  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:00.739017  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:59.233766  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:01.733686  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:01.301450  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:03.301533  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:02.739209  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:04.739248  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:06.739344  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:04.233335  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:06.233688  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:08.233796  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:05.800709  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:07.801022  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:09.239054  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:11.739385  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:10.233869  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:12.733211  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:10.300739  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:12.301541  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:14.239654  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:16.739048  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:15.233047  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:17.733710  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:14.801253  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:16.801334  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:18.801736  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:19.238509  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:21.238761  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:20.232874  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:22.232916  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:21.301555  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:23.800846  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:23.239162  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:25.239455  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:27.240625  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:24.233476  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:26.733575  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:25.801246  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:28.301212  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:29.739116  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:31.739148  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:28.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:31.233731  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:33.233890  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:30.301480  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:32.800970  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:34.238950  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:36.239143  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:35.733135  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:37.733332  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:38.738709  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:39.241032  278286 node_ready.go:38] duration metric: took 4m0.009684254s waiting for node "no-preload-211859" to be "Ready" ...
	I0108 21:40:39.243691  278286 out.go:177] 
	W0108 21:40:39.245553  278286 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:40:39.245570  278286 out.go:239] * 
	W0108 21:40:39.246458  278286 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:40:39.249123  278286 out.go:177] 
	I0108 21:40:35.300833  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:37.801290  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:40.233285  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:42.234025  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:40.300917  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:42.301122  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:44.301723  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:44.733707  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:47.232740  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:46.801299  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:48.801395  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:49.233976  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:51.733761  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:51.301336  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:53.301705  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:54.233585  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:56.233841  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:55.801251  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:58.301027  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:58.733149  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:01.233702  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:03.233901  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:00.301463  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:02.801220  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:05.733569  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:08.233143  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:04.801563  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:07.301530  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:08.802728  274657 node_ready.go:38] duration metric: took 4m0.007692604s waiting for node "old-k8s-version-211828" to be "Ready" ...
	I0108 21:41:08.805120  274657 out.go:177] 
	W0108 21:41:08.806709  274657 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:41:08.806733  274657 out.go:239] * 
	W0108 21:41:08.807656  274657 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:41:08.809434  274657 out.go:177] 
	I0108 21:41:10.234013  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:12.733801  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:15.233487  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:17.233814  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:19.233917  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:21.234234  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:23.732866  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:25.733792  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:27.734348  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:30.233612  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:32.233852  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:34.233919  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:36.733239  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:38.733765  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:41.233693  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:41.235775  282279 node_ready.go:38] duration metric: took 4m0.009149141s waiting for node "default-k8s-diff-port-211952" to be "Ready" ...
	I0108 21:41:41.238174  282279 out.go:177] 
	W0108 21:41:41.239722  282279 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:41:41.239744  282279 out.go:239] * 
	W0108 21:41:41.240644  282279 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:41:41.242421  282279 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	c5e714c6bc655       d6e3e26021b60       About a minute ago   Running             kindnet-cni               1                   b1608797a9dec
	ab48a6e41abea       d6e3e26021b60       4 minutes ago        Exited              kindnet-cni               0                   b1608797a9dec
	7e39d325fcec3       beaaf00edd38a       4 minutes ago        Running             kube-proxy                0                   d34fff239d9ab
	48fea364952d6       0346dbd74bcb9       4 minutes ago        Running             kube-apiserver            2                   626297ce42f86
	abdda2bcae93a       6d23ec0e8b87e       4 minutes ago        Running             kube-scheduler            2                   666f3069f4728
	e3c428ddf8ccc       6039992312758       4 minutes ago        Running             kube-controller-manager   2                   d6ec92c293591
	b4a61910cd1f4       a8a176a5d5d69       4 minutes ago        Running             etcd                      2                   2f2f9f37ad42e
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sun 2023-01-08 21:32:49 UTC, end at Sun 2023-01-08 21:41:42 UTC. --
	Jan 08 21:37:41 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:37:41.339302549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:37:41 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:37:41.339312970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:37:41 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:37:41.339613444Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d34fff239d9ab046357913d96720b8d85421ebd6d5fbb383028bb387029bcee2 pid=4316 runtime=io.containerd.runc.v2
	Jan 08 21:37:41 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:37:41.712909484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-plrbr,Uid:2fbc5e9c-155c-4e49-bdd1-c454329ba6cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"d34fff239d9ab046357913d96720b8d85421ebd6d5fbb383028bb387029bcee2\""
	Jan 08 21:37:41 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:37:41.717182117Z" level=info msg="CreateContainer within sandbox \"d34fff239d9ab046357913d96720b8d85421ebd6d5fbb383028bb387029bcee2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Jan 08 21:37:41 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:37:41.813223865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-8s5wp,Uid:45060c63-d2ae-429a-b95f-cbbac924d3a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1608797a9deca27ca99a6bd8949e5707d38c9f2c1615b72d94e754aa0a663a6\""
	Jan 08 21:37:41 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:37:41.817015731Z" level=info msg="CreateContainer within sandbox \"b1608797a9deca27ca99a6bd8949e5707d38c9f2c1615b72d94e754aa0a663a6\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Jan 08 21:37:41 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:37:41.819772680Z" level=info msg="CreateContainer within sandbox \"d34fff239d9ab046357913d96720b8d85421ebd6d5fbb383028bb387029bcee2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7e39d325fcec3c380ec395f031e958aba04ee24a6c69bb6f1a8b7b45ee7def8a\""
	Jan 08 21:37:41 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:37:41.820989598Z" level=info msg="StartContainer for \"7e39d325fcec3c380ec395f031e958aba04ee24a6c69bb6f1a8b7b45ee7def8a\""
	Jan 08 21:37:41 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:37:41.838038850Z" level=info msg="CreateContainer within sandbox \"b1608797a9deca27ca99a6bd8949e5707d38c9f2c1615b72d94e754aa0a663a6\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"ab48a6e41abea1dbd6e0ebadbc510273e8cb1d053d95888dd039f42e5a79bde1\""
	Jan 08 21:37:41 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:37:41.839039173Z" level=info msg="StartContainer for \"ab48a6e41abea1dbd6e0ebadbc510273e8cb1d053d95888dd039f42e5a79bde1\""
	Jan 08 21:37:42 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:37:42.138183129Z" level=info msg="StartContainer for \"7e39d325fcec3c380ec395f031e958aba04ee24a6c69bb6f1a8b7b45ee7def8a\" returns successfully"
	Jan 08 21:37:42 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:37:42.220638070Z" level=info msg="StartContainer for \"ab48a6e41abea1dbd6e0ebadbc510273e8cb1d053d95888dd039f42e5a79bde1\" returns successfully"
	Jan 08 21:38:26 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:38:26.834372660Z" level=error msg="ContainerStatus for \"6cbc8ffd048e14d7fe1838b69c7e93fc78c1d7dd73cb164e926c6b5bcb166c35\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6cbc8ffd048e14d7fe1838b69c7e93fc78c1d7dd73cb164e926c6b5bcb166c35\": not found"
	Jan 08 21:38:26 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:38:26.834891459Z" level=error msg="ContainerStatus for \"f07a4ddc39913f7397899da6b88a6cc2d3dc305c72504d02789b7d8318e83bed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f07a4ddc39913f7397899da6b88a6cc2d3dc305c72504d02789b7d8318e83bed\": not found"
	Jan 08 21:38:26 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:38:26.835358233Z" level=error msg="ContainerStatus for \"72b66a5fbc29a780cf9d12666cbea9d7995fca5c0385b077fdfda08e85b1f9ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"72b66a5fbc29a780cf9d12666cbea9d7995fca5c0385b077fdfda08e85b1f9ad\": not found"
	Jan 08 21:38:26 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:38:26.835793003Z" level=error msg="ContainerStatus for \"7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc\": not found"
	Jan 08 21:40:22 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:40:22.759421211Z" level=info msg="shim disconnected" id=ab48a6e41abea1dbd6e0ebadbc510273e8cb1d053d95888dd039f42e5a79bde1
	Jan 08 21:40:22 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:40:22.759528941Z" level=warning msg="cleaning up after shim disconnected" id=ab48a6e41abea1dbd6e0ebadbc510273e8cb1d053d95888dd039f42e5a79bde1 namespace=k8s.io
	Jan 08 21:40:22 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:40:22.759548116Z" level=info msg="cleaning up dead shim"
	Jan 08 21:40:22 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:40:22.768513485Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:40:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4797 runtime=io.containerd.runc.v2\n"
	Jan 08 21:40:23 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:40:23.303543646Z" level=info msg="CreateContainer within sandbox \"b1608797a9deca27ca99a6bd8949e5707d38c9f2c1615b72d94e754aa0a663a6\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Jan 08 21:40:23 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:40:23.316991856Z" level=info msg="CreateContainer within sandbox \"b1608797a9deca27ca99a6bd8949e5707d38c9f2c1615b72d94e754aa0a663a6\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"c5e714c6bc655a1f19adbc135b885bb98afbe98fa6867b6a63c0de732f8effaf\""
	Jan 08 21:40:23 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:40:23.317579679Z" level=info msg="StartContainer for \"c5e714c6bc655a1f19adbc135b885bb98afbe98fa6867b6a63c0de732f8effaf\""
	Jan 08 21:40:23 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:40:23.426099944Z" level=info msg="StartContainer for \"c5e714c6bc655a1f19adbc135b885bb98afbe98fa6867b6a63c0de732f8effaf\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-211952
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-211952
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=default-k8s-diff-port-211952
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T21_37_27_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 21:37:24 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-211952
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 08 Jan 2023 21:41:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 21:37:36 +0000   Sun, 08 Jan 2023 21:37:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 21:37:36 +0000   Sun, 08 Jan 2023 21:37:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 21:37:36 +0000   Sun, 08 Jan 2023 21:37:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 08 Jan 2023 21:37:36 +0000   Sun, 08 Jan 2023 21:37:21 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-diff-port-211952
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                fe5ecc0a-a17f-4998-8022-5b0438ac303f
	  Boot ID:                    abb1671c-ddf5-4694-bdc8-1024e5cc0b18
	  Kernel Version:             5.15.0-1025-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.10
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-diff-port-211952                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m15s
	  kube-system                 kindnet-8s5wp                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-default-k8s-diff-port-211952             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-211952    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-proxy-plrbr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-default-k8s-diff-port-211952             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m     kube-proxy       
	  Normal  Starting                 4m16s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m16s  kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m16s  kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m16s  kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s   node-controller  Node default-k8s-diff-port-211952 event: Registered Node default-k8s-diff-port-211952 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +2.971851] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027844] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027909] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[Jan 8 21:19] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.006215] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023951] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.967852] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.035798] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023925] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.940341] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.027361] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.019905] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	
	* 
	* ==> etcd [b4a61910cd1f4724239205e8f7baa67961a32f0b087a58f553451cf3eb6d76e9] <==
	* {"level":"info","ts":"2023-01-08T21:37:21.039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-01-08T21:37:21.039Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-01-08T21:37:21.040Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-08T21:37:21.040Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-01-08T21:37:21.040Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-01-08T21:37:21.040Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-08T21:37:21.040Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-08T21:37:21.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2023-01-08T21:37:21.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-01-08T21:37:21.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2023-01-08T21:37:21.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2023-01-08T21:37:21.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-01-08T21:37:21.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2023-01-08T21:37:21.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-01-08T21:37:21.630Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:37:21.631Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:37:21.631Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:37:21.631Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:37:21.631Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:default-k8s-diff-port-211952 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-08T21:37:21.631Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:37:21.631Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:37:21.631Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-08T21:37:21.631Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-08T21:37:21.632Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-01-08T21:37:21.632Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  21:41:42 up  1:24,  0 users,  load average: 0.38, 0.35, 0.76
	Linux default-k8s-diff-port-211952 5.15.0-1025-gcp #32~20.04.2-Ubuntu SMP Tue Nov 29 08:31:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [48fea364952d660231e61de96e258c90261216a9125cef423faa8556528853bf] <==
	* I0108 21:37:40.377284       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I0108 21:37:42.642570       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.109.112.65]
	I0108 21:37:43.454080       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.107.225.84]
	I0108 21:37:43.512227       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.98.78.28]
	W0108 21:37:43.524353       1 handler_proxy.go:105] no RequestInfo found in the context
	W0108 21:37:43.524377       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:37:43.524391       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:37:43.524399       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0108 21:37:43.524440       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:37:43.525443       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:38:43.524656       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:38:43.524690       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:38:43.524698       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:38:43.525823       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:38:43.525882       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:38:43.525894       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:40:43.525443       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:40:43.525484       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:40:43.525490       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:40:43.526605       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:40:43.526692       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:40:43.526707       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [e3c428ddf8cccb4e892d7504dcff11cf2d810ed67e3e1ddfc5fe90992e47e910] <==
	* I0108 21:37:43.371102       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5949f5c576" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5949f5c576-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0108 21:37:43.372456       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-f87d45d87" failed with pods "kubernetes-dashboard-f87d45d87-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0108 21:37:43.372499       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-f87d45d87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-f87d45d87-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0108 21:37:43.412641       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-5949f5c576" failed with pods "dashboard-metrics-scraper-5949f5c576-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0108 21:37:43.412643       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-f87d45d87" failed with pods "kubernetes-dashboard-f87d45d87-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0108 21:37:43.412651       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5949f5c576" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5949f5c576-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0108 21:37:43.412716       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-f87d45d87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-f87d45d87-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0108 21:37:43.437113       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-f87d45d87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-f87d45d87-bnlzk"
	I0108 21:37:43.461408       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5949f5c576" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5949f5c576-b87fb"
	E0108 21:38:10.481230       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:38:10.902831       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:38:40.487112       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:38:40.912682       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:39:10.493021       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:39:10.925536       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:39:40.499510       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:39:40.935786       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:40:10.506923       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:40:10.947738       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:40:40.514168       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:40:40.958249       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:41:10.521770       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:41:10.971027       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:41:40.526795       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:41:40.982801       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [7e39d325fcec3c380ec395f031e958aba04ee24a6c69bb6f1a8b7b45ee7def8a] <==
	* I0108 21:37:42.315555       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0108 21:37:42.315778       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0108 21:37:42.315808       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0108 21:37:42.410324       1 server_others.go:206] "Using iptables Proxier"
	I0108 21:37:42.410383       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0108 21:37:42.410396       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0108 21:37:42.410417       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0108 21:37:42.410457       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:37:42.410966       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:37:42.411220       1 server.go:661] "Version info" version="v1.25.3"
	I0108 21:37:42.411233       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:37:42.412192       1 config.go:444] "Starting node config controller"
	I0108 21:37:42.412209       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0108 21:37:42.412114       1 config.go:226] "Starting endpoint slice config controller"
	I0108 21:37:42.412238       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0108 21:37:42.412476       1 config.go:317] "Starting service config controller"
	I0108 21:37:42.412506       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0108 21:37:42.512504       1 shared_informer.go:262] Caches are synced for node config
	I0108 21:37:42.512803       1 shared_informer.go:262] Caches are synced for service config
	I0108 21:37:42.513016       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [abdda2bcae93a4c9457bd4b491d97e1ceac603d4e346988062f43313f78e961c] <==
	* W0108 21:37:24.138917       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 21:37:24.138928       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 21:37:24.212018       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:37:24.212241       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:37:24.212341       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:37:24.212407       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:37:24.213805       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:37:24.213980       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:37:24.213838       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:37:24.214164       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 21:37:24.214612       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:37:24.214823       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 21:37:24.985444       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:37:24.985477       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:37:25.059921       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:37:25.059953       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 21:37:25.111960       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:37:25.111992       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 21:37:25.121962       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:37:25.121998       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:37:25.194792       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:37:25.194839       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 21:37:25.256188       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:37:25.256225       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0108 21:37:25.735198       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:32:49 UTC, end at Sun 2023-01-08 21:41:42 UTC. --
	Jan 08 21:39:47 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:39:47.140908    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:39:52 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:39:52.141659    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:39:57 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:39:57.143379    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:02 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:40:02.144992    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:07 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:40:07.145879    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:12 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:40:12.147327    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:17 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:40:17.148395    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:22 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:40:22.149686    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:23 default-k8s-diff-port-211952 kubelet[3859]: I0108 21:40:23.301206    3859 scope.go:115] "RemoveContainer" containerID="ab48a6e41abea1dbd6e0ebadbc510273e8cb1d053d95888dd039f42e5a79bde1"
	Jan 08 21:40:27 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:40:27.150497    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:32 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:40:32.151935    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:37 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:40:37.153025    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:42 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:40:42.154707    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:47 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:40:47.155704    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:52 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:40:52.157124    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:40:57 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:40:57.158449    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:41:02 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:41:02.159107    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:41:07 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:41:07.159921    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:41:12 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:41:12.161528    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:41:17 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:41:17.162255    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:41:22 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:41:22.163762    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:41:27 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:41:27.164973    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:41:32 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:41:32.165881    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:41:37 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:41:37.166657    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:41:42 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:41:42.168210    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-211952 -n default-k8s-diff-port-211952
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-211952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-565d847f94-vl6gh metrics-server-5c8fd5cf8-mctg7 storage-provisioner dashboard-metrics-scraper-5949f5c576-b87fb kubernetes-dashboard-f87d45d87-bnlzk
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-diff-port-211952 describe pod coredns-565d847f94-vl6gh metrics-server-5c8fd5cf8-mctg7 storage-provisioner dashboard-metrics-scraper-5949f5c576-b87fb kubernetes-dashboard-f87d45d87-bnlzk
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-211952 describe pod coredns-565d847f94-vl6gh metrics-server-5c8fd5cf8-mctg7 storage-provisioner dashboard-metrics-scraper-5949f5c576-b87fb kubernetes-dashboard-f87d45d87-bnlzk: exit status 1 (62.826972ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-vl6gh" not found
	Error from server (NotFound): pods "metrics-server-5c8fd5cf8-mctg7" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-5949f5c576-b87fb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-f87d45d87-bnlzk" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-diff-port-211952 describe pod coredns-565d847f94-vl6gh metrics-server-5c8fd5cf8-mctg7 storage-provisioner dashboard-metrics-scraper-5949f5c576-b87fb kubernetes-dashboard-f87d45d87-bnlzk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (535.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-z6czc" [596767b1-077a-40da-8e3e-9ba50e8bcd61] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
E0108 21:40:50.301006   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
E0108 21:40:56.111917   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-211859 -n no-preload-211859
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-01-08 21:49:41.761356128 +0000 UTC m=+4943.904528409
start_stop_delete_test.go:274: (dbg) Run:  kubectl --context no-preload-211859 describe po kubernetes-dashboard-f87d45d87-z6czc -n kubernetes-dashboard
start_stop_delete_test.go:274: (dbg) Non-zero exit: kubectl --context no-preload-211859 describe po kubernetes-dashboard-f87d45d87-z6czc -n kubernetes-dashboard: context deadline exceeded (1.21µs)
start_stop_delete_test.go:274: kubectl --context no-preload-211859 describe po kubernetes-dashboard-f87d45d87-z6czc -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:274: (dbg) Run:  kubectl --context no-preload-211859 logs kubernetes-dashboard-f87d45d87-z6czc -n kubernetes-dashboard
start_stop_delete_test.go:274: (dbg) Non-zero exit: kubectl --context no-preload-211859 logs kubernetes-dashboard-f87d45d87-z6czc -n kubernetes-dashboard: context deadline exceeded (192ns)
start_stop_delete_test.go:274: kubectl --context no-preload-211859 logs kubernetes-dashboard-f87d45d87-z6czc -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
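To reproduce the check the harness timed out on, the same wait can be approximated with plain kubectl against the profile named above; this is a hedged sketch for manual triage, not part of the test code (profile name and the 9m timeout are taken from the failure message):

	kubectl --context no-preload-211859 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context no-preload-211859 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m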
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-211859
helpers_test.go:235: (dbg) docker inspect no-preload-211859:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65",
	        "Created": "2023-01-08T21:19:00.370984432Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 278593,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:31:48.048620229Z",
	            "FinishedAt": "2023-01-08T21:31:46.405509925Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/hostname",
	        "HostsPath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/hosts",
	        "LogPath": "/var/lib/docker/containers/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65/23cabd631389ae96c1b7c008df6a8398933f940af86d5b001e0dc6f75a0cee65-json.log",
	        "Name": "/no-preload-211859",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-211859:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-211859",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676-init/diff:/var/lib/docker/overlay2/08b33854295261bb38a0074055b3dd7f2f489d35ea70ee5a594182ad1dc0fd2b/diff:/var/lib/docker/overlay2/5abe192e44c49d2a25560e4c1ae4a6ba5ec58100f8e1869d306a28b614caf414/diff:/var/lib/docker/overlay2/156a32df2dac676d7cd6558d5ad13be5054eb2fddc8d1b10137bf34bd6b1ddf7/diff:/var/lib/docker/overlay2/f3a5b8630133b5cdc48bbdb98f666efad221386dbd08f5fc096c7ba6d3f31667/diff:/var/lib/docker/overlay2/4d4ccd218e479874818175ec6c4e28f06f0084eed74b9a83af5a96e8bf41e6c7/diff:/var/lib/docker/overlay2/ab8a3247b29cb45a0024dbc2477a50b042f03e39064c09e23d21a2b5f331013c/diff:/var/lib/docker/overlay2/cd6161fa5d366910a9eb537894f374c49b0ba256c81a9c1ad3d4f4786d730668/diff:/var/lib/docker/overlay2/a26d88db409214dee405ed385f40b0d58e37ef670c26e65c1132eb27e3bb984d/diff:/var/lib/docker/overlay2/f76d833c20bf46fe7aec412f931039cdeb58b129552b97228c55fada8b05e63a/diff:/var/lib/docker/overlay2/584beb
80220462b8bc5a6c651735d55544ebef775c71b1a91b92235049a7e9da/diff:/var/lib/docker/overlay2/5209ebd508837f05291d5f22eff307db47f9c94691dd096cdc6ff6446db45461/diff:/var/lib/docker/overlay2/968fc302c0737742d4698bb0026b5cfc50c4b2934930db492c97b346fca2bd67/diff:/var/lib/docker/overlay2/b469d09ff82a75fb0f6dd04264a04b21d3d113c8577afc6dbc6a7dafe4022179/diff:/var/lib/docker/overlay2/2869116cf2f02bdc6170bde623a11022eba8e5c0f84630c9a0fd0a86483d0ed2/diff:/var/lib/docker/overlay2/ad4794ae637b0ce306963007b35562110fba050d9e3cd6196830448c749d7cc6/diff:/var/lib/docker/overlay2/a53b191897dfffb12ad2948cad16d668a0f7c2be5e3e6fd2556fe22d8d608879/diff:/var/lib/docker/overlay2/bbc7772821406640e29f442eb8752d92d7b72e8314035d6cf94255976b90642e/diff:/var/lib/docker/overlay2/5e3ac5600af83e3f6f756895fbd7a66fa97e799f659541508353078d057fd89b/diff:/var/lib/docker/overlay2/79c23e7084aa1de637cb09a7c07679d6c8d4ff6c6e7384f7f15cb549f8186fd6/diff:/var/lib/docker/overlay2/86ef1d4759376f1edb9cf43004ae59ba61200d79acea357b872de5247071531a/diff:/var/lib/d
ocker/overlay2/5c948a7514e051c2a3b4600944d475a674404c1f58b1bd7fd4d7b764be35c3e7/diff:/var/lib/docker/overlay2/71b25d81823337d776e0202f623b2f822a2ebba0e8f935e3211da145bbdac22d/diff:/var/lib/docker/overlay2/4261f26f8c837e7f8f07ba0cd2aca7274aa96561ac0bd104af28d44c15678f8d/diff:/var/lib/docker/overlay2/f2a71b21156c3ad367bb9fe8bde3e75df6681f9e505d34d8e1b635e50aa0e29b/diff:/var/lib/docker/overlay2/9ef1fde443b60bf4cda7ef2a51adcacef4383280f30eb6ee14e23959a5395e18/diff:/var/lib/docker/overlay2/aa727d5a432452b3e1ed2fc994db0bf2d5a47af89bc768a1e2b633baa8319025/diff:/var/lib/docker/overlay2/dc9499b738334b5626de1363086d853c4e6c2c5b2d51f58e8e1ed8346cc39f45/diff:/var/lib/docker/overlay2/03cc962675627b454b9857b5ffcac439143653d35cd42ad886571fbecd955c6c/diff:/var/lib/docker/overlay2/32064543ecb8bae609fda6b220233d7b73f4e601bd320e1d389451fb120c2459/diff:/var/lib/docker/overlay2/a0c00493cc3fdbdaa87f61faf83c8fcedb9ee4d5b24ecc35eceba0429b6dcc88/diff:/var/lib/docker/overlay2/18cbe77f8dab8144889e1deb46a8795fd207afbdfd9ea3fdeb526b72d7e
4a4cb/diff:/var/lib/docker/overlay2/dca4f5d0bca28bc75fa764afd39d07871f051c2d362fc2850ca957b975f75555/diff:/var/lib/docker/overlay2/cd071d8f661b9ff9c6d861fce06842188fda3d307062488b93944a97883f4a67/diff:/var/lib/docker/overlay2/5a0c166a1e11d0bb9f7de4a882be960f70ac972e91b3ab8fd83fc6eec2bf1d5e/diff:/var/lib/docker/overlay2/43eb3ae44d3bf3e6cbf5a66accacb1247ef210213c56a67097949d06794f4bbc/diff:/var/lib/docker/overlay2/b60cb5b525f4aed6b27cf2e451596e134d1185254f3798d5a91357814552e273/diff:/var/lib/docker/overlay2/e5c8c85e78fedb1981436f368ad70bdfe38d240dddceac3d103e1d09bc252cef/diff:/var/lib/docker/overlay2/7bf66ab7d17eda3c6760fe43588afa33bb56279f5a7202de9cc98b4711f0da5b/diff:/var/lib/docker/overlay2/d5dbd6e8a20b8681e7b55b57c382c8c9029da8496cc9e0f4d1eb3b224583535e/diff:/var/lib/docker/overlay2/de3ca9b514aa5071c0fd39d0d80fa080bce6e82e872747ec916ef7bf379c0c2a/diff:/var/lib/docker/overlay2/80bff03df7f2a6639d98a2c99d1cc55a36dc9110501452bfd7bce3bce496541e/diff:/var/lib/docker/overlay2/471bea5a4732f210c3fcf87a02643b615a96f6
39d01ac2c3e71f7886b23733a5/diff:/var/lib/docker/overlay2/29dd84326e3ef52c85008a60a7ff2225fd18f1d51ec3e60f143a0d93e442469d/diff:/var/lib/docker/overlay2/05d035eba3e40a396836d9917064869afd7ec70920f526f8903dc8f5a9f2dce3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff8a04bb249ea2676be858c7281db2547d3e1257ed49d4831ca2cc831070d676/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-211859",
	                "Source": "/var/lib/docker/volumes/no-preload-211859/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-211859",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-211859",
	                "name.minikube.sigs.k8s.io": "no-preload-211859",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e9cfd9ecce7176b07f9c74477aa29aa9c95c26877e9d01e814ddd93bb6301c38",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e9cfd9ecce71",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-211859": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "23cabd631389",
	                        "no-preload-211859"
	                    ],
	                    "NetworkID": "f6ac14d41355072c0829af36f4aed661fe422e2af93237ea348f6b100ade02e6",
	                    "EndpointID": "37d4278be35398ae25b032f4d4fcc8f365aa4610b071008ea955f6f3bc3face6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
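The inspect dump above can be narrowed to the fields the post-mortem actually relies on (container state and published host ports) with docker's Go-template output; a minimal sketch, using the same template form that appears later in this log:

	docker inspect -f '{{.State.Status}} started={{.State.StartedAt}}' no-preload-211859
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-211859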
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-211859 -n no-preload-211859

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-211859 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-212639                 | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-212639                      | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-212639 sudo                                  | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| addons  | enable metrics-server -p old-k8s-version-211828            | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-211828                                  | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-211828                 | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-211828                                  | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --kvm-network=default                                      |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                              |         |         |                     |                     |
	|         | --keep-context=false                                       |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-211859                 | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p no-preload-211859                                       | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-211859                      | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p no-preload-211859                                       | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr                                          |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC | 08 Jan 23 21:32 UTC |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC | 08 Jan 23 21:32 UTC |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-211952           | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC | 08 Jan 23 21:32 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC |                     |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 21:32:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:32:48.271671  282279 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:32:48.271850  282279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:32:48.271858  282279 out.go:309] Setting ErrFile to fd 2...
	I0108 21:32:48.271863  282279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:32:48.271968  282279 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:32:48.272502  282279 out.go:303] Setting JSON to false
	I0108 21:32:48.273983  282279 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4518,"bootTime":1673209051,"procs":571,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:32:48.274047  282279 start.go:135] virtualization: kvm guest
	I0108 21:32:48.276504  282279 out.go:177] * [default-k8s-diff-port-211952] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:32:48.277957  282279 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:32:48.277885  282279 notify.go:220] Checking for updates...
	I0108 21:32:48.279445  282279 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:32:48.280736  282279 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:32:48.281949  282279 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:32:48.283257  282279 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:32:48.285163  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:32:48.285682  282279 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:32:48.316260  282279 docker.go:137] docker version: linux-20.10.22
	I0108 21:32:48.316350  282279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:32:48.413793  282279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:32:48.33729701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:32:48.413905  282279 docker.go:254] overlay module found
	I0108 21:32:48.417336  282279 out.go:177] * Using the docker driver based on existing profile
	I0108 21:32:48.418815  282279 start.go:294] selected driver: docker
	I0108 21:32:48.418829  282279 start.go:838] validating driver "docker" against &{Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:32:48.419310  282279 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:32:48.420906  282279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:32:48.521697  282279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:32:48.442146841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:32:48.522015  282279 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:32:48.522046  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:32:48.522065  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:32:48.522085  282279 start_flags.go:317] config:
	{Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:32:48.525023  282279 out.go:177] * Starting control plane node default-k8s-diff-port-211952 in cluster default-k8s-diff-port-211952
	I0108 21:32:48.526212  282279 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:32:48.527567  282279 out.go:177] * Pulling base image ...
	I0108 21:32:48.528812  282279 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:32:48.528852  282279 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0108 21:32:48.528864  282279 cache.go:57] Caching tarball of preloaded images
	I0108 21:32:48.528902  282279 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:32:48.529139  282279 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:32:48.529153  282279 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0108 21:32:48.529259  282279 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/config.json ...
	I0108 21:32:48.553994  282279 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:32:48.554019  282279 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:32:48.554037  282279 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:32:48.554075  282279 start.go:364] acquiring machines lock for default-k8s-diff-port-211952: {Name:mk8d09fc97f48331eb5f466fa120df2ec3fb1468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:32:48.554172  282279 start.go:368] acquired machines lock for "default-k8s-diff-port-211952" in 76.094µs
	I0108 21:32:48.554190  282279 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:32:48.554194  282279 fix.go:55] fixHost starting: 
	I0108 21:32:48.554387  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:32:48.579038  282279 fix.go:103] recreateIfNeeded on default-k8s-diff-port-211952: state=Stopped err=<nil>
	W0108 21:32:48.579064  282279 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:32:48.581203  282279 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-211952" ...
	I0108 21:32:45.206742  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:47.706026  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:47.985367  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:50.484419  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
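	# Hedged triage sketch (editorial, not minikube output): the Unschedulable messages above point at a node taint
	# blocking coredns; substitute the kubeconfig context of the affected profile, e.g. old-k8s-version-211828 or no-preload-211859.
	kubectl --context <profile> describe nodes | grep -A3 '^Taints:'
	kubectl --context <profile> -n kube-system get pods -o wide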
	I0108 21:32:48.582569  282279 cli_runner.go:164] Run: docker start default-k8s-diff-port-211952
	I0108 21:32:48.934338  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:32:48.961177  282279 kic.go:415] container "default-k8s-diff-port-211952" state is running.
	I0108 21:32:48.961578  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:48.987154  282279 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/config.json ...
	I0108 21:32:48.987361  282279 machine.go:88] provisioning docker machine ...
	I0108 21:32:48.987381  282279 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-211952"
	I0108 21:32:48.987415  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:49.012441  282279 main.go:134] libmachine: Using SSH client type: native
	I0108 21:32:49.012623  282279 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0108 21:32:49.012640  282279 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-211952 && echo "default-k8s-diff-port-211952" | sudo tee /etc/hostname
	I0108 21:32:49.013295  282279 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56504->127.0.0.1:33057: read: connection reset by peer
	I0108 21:32:52.144323  282279 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-211952
	
	I0108 21:32:52.144405  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.170929  282279 main.go:134] libmachine: Using SSH client type: native
	I0108 21:32:52.171092  282279 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0108 21:32:52.171123  282279 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-211952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-211952/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-211952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:32:52.287354  282279 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:32:52.287380  282279 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:32:52.287397  282279 ubuntu.go:177] setting up certificates
	I0108 21:32:52.287404  282279 provision.go:83] configureAuth start
	I0108 21:32:52.287448  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:52.314640  282279 provision.go:138] copyHostCerts
	I0108 21:32:52.314692  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:32:52.314701  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:32:52.314776  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:32:52.314872  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:32:52.314881  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:32:52.314915  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:32:52.314981  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:32:52.314990  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:32:52.315028  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:32:52.315090  282279 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-211952 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-211952]
	I0108 21:32:52.393623  282279 provision.go:172] copyRemoteCerts
	I0108 21:32:52.393682  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:32:52.393732  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.420616  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.506700  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:32:52.523990  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0108 21:32:52.541202  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:32:52.558612  282279 provision.go:86] duration metric: configureAuth took 271.196425ms
	I0108 21:32:52.558637  282279 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:32:52.558842  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:32:52.558859  282279 machine.go:91] provisioned docker machine in 3.571482619s
	I0108 21:32:52.558868  282279 start.go:300] post-start starting for "default-k8s-diff-port-211952" (driver="docker")
	I0108 21:32:52.558880  282279 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:32:52.558932  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:32:52.558975  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.584657  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.674855  282279 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:32:52.677553  282279 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:32:52.677581  282279 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:32:52.677595  282279 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:32:52.677605  282279 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:32:52.677620  282279 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:32:52.677677  282279 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:32:52.677760  282279 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:32:52.677874  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:32:52.684482  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:32:52.701176  282279 start.go:303] post-start completed in 142.293081ms
	I0108 21:32:52.701237  282279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:32:52.701267  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.726596  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.807879  282279 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:32:52.811789  282279 fix.go:57] fixHost completed within 4.257589708s
	I0108 21:32:52.811814  282279 start.go:83] releasing machines lock for "default-k8s-diff-port-211952", held for 4.257630168s
	I0108 21:32:52.811884  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:52.836240  282279 ssh_runner.go:195] Run: cat /version.json
	I0108 21:32:52.836282  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.836337  282279 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:32:52.836380  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.860700  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.862030  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.970766  282279 ssh_runner.go:195] Run: systemctl --version
	I0108 21:32:52.974774  282279 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:32:52.987146  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:32:52.996877  282279 docker.go:189] disabling docker service ...
	I0108 21:32:52.996922  282279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:32:53.006589  282279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:32:53.015555  282279 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:32:53.091863  282279 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:32:53.169568  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:32:53.178903  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:32:53.192470  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:32:53.200832  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:32:53.209487  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:32:53.217000  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
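These steps write the crictl endpoint file and patch containerd's config in place. A sketch of checking the result by hand (file paths and keys taken from the commands above):

	# Sketch: show the CRI endpoint file and the containerd options the sed edits touch
	cat /etc/crictl.yaml
	grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml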
	I0108 21:32:53.224820  282279 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:32:53.231063  282279 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:32:53.237511  282279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:32:50.205796  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:52.206925  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:54.705913  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:52.485249  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:54.984287  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:56.984440  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:53.318100  282279 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:32:53.382213  282279 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:32:53.382279  282279 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:32:53.386027  282279 start.go:472] Will wait 60s for crictl version
	I0108 21:32:53.386088  282279 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:32:53.410740  282279 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:32:53Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
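"server is not initialized yet" usually just means containerd is still coming back up after the restart; the retry above succeeds about eleven seconds later. A manual check would look like this (sketch; socket path from the crictl.yaml written earlier):

	# Sketch: check the containerd unit and query the CRI endpoint directly
	sudo systemctl status containerd --no-pager
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version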
	I0108 21:32:56.706559  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:59.206591  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:59.485251  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:01.985238  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:04.458457  282279 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:33:04.481958  282279 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:33:04.482015  282279 ssh_runner.go:195] Run: containerd --version
	I0108 21:33:04.505934  282279 ssh_runner.go:195] Run: containerd --version
	I0108 21:33:04.531417  282279 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:33:01.206633  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:03.705866  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:04.484384  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:06.484587  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:04.533192  282279 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-211952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:33:04.556070  282279 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0108 21:33:04.559379  282279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:33:04.568499  282279 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:33:04.568548  282279 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:33:04.591581  282279 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:33:04.591606  282279 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:33:04.591658  282279 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:33:04.614523  282279 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:33:04.614545  282279 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:33:04.614587  282279 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:33:04.638172  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:33:04.638197  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:33:04.638209  282279 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:33:04.638221  282279 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-211952 NodeName:default-k8s-diff-port-211952 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:33:04.638396  282279 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-diff-port-211952"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:33:04.638498  282279 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-diff-port-211952 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0108 21:33:04.638546  282279 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:33:04.645671  282279 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:33:04.645725  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:33:04.652367  282279 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (521 bytes)
	I0108 21:33:04.664767  282279 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:33:04.676853  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes)
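The three scp-from-memory steps materialize the kubelet drop-in, the kubelet unit, and the kubeadm config shown above. A sketch of inspecting them on the node (paths and the 8444 API port are the ones logged):

	# Sketch: review the files minikube just wrote and confirm the non-default API port
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo cat /lib/systemd/system/kubelet.service
	grep -nE 'bindPort|controlPlaneEndpoint' /var/tmp/minikube/kubeadm.yaml.new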
	I0108 21:33:04.689096  282279 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:33:04.691974  282279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:33:04.700883  282279 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952 for IP: 192.168.67.2
	I0108 21:33:04.700988  282279 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:33:04.701028  282279 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:33:04.701091  282279 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/client.key
	I0108 21:33:04.701143  282279 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key.c7fa3a9e
	I0108 21:33:04.701174  282279 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.key
	I0108 21:33:04.701257  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:33:04.701282  282279 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:33:04.701292  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:33:04.701314  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:33:04.701334  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:33:04.701353  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:33:04.701392  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:33:04.701980  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:33:04.719063  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:33:04.735492  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:33:04.752219  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:33:04.769562  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:33:04.785821  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:33:04.802771  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:33:04.820712  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:33:04.838855  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:33:04.855960  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:33:04.872964  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:33:04.890046  282279 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:33:04.902625  282279 ssh_runner.go:195] Run: openssl version
	I0108 21:33:04.907630  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:33:04.914856  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.917989  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.918039  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.922582  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:33:04.929304  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:33:04.936712  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.939656  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.939705  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.944460  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:33:04.951168  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:33:04.958399  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.961446  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.961485  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.966099  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
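The hash-named symlinks created above are what OpenSSL's CA lookup walks. Verifying one pair by hand would look like this (sketch; hash value and paths from the ln commands in the log):

	# Sketch: the subject hash of the cert should match the symlink name
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expected: b5213941
	ls -l /etc/ssl/certs/b5213941.0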
	I0108 21:33:04.973053  282279 kubeadm.go:396] StartCluster: {Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:33:04.973140  282279 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:33:04.973193  282279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:33:04.997395  282279 cri.go:87] found id: "852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	I0108 21:33:04.997418  282279 cri.go:87] found id: "7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc"
	I0108 21:33:04.997424  282279 cri.go:87] found id: "26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225"
	I0108 21:33:04.997430  282279 cri.go:87] found id: "581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d"
	I0108 21:33:04.997436  282279 cri.go:87] found id: "e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa"
	I0108 21:33:04.997442  282279 cri.go:87] found id: "b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d"
	I0108 21:33:04.997448  282279 cri.go:87] found id: ""
	I0108 21:33:04.997486  282279 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:33:05.008860  282279 cri.go:114] JSON = null
	W0108 21:33:05.008911  282279 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0108 21:33:05.008979  282279 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:33:05.015919  282279 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:33:05.015939  282279 kubeadm.go:627] restartCluster start
	I0108 21:33:05.015976  282279 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:33:05.022384  282279 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.023096  282279 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-211952" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:33:05.023497  282279 kubeconfig.go:146] "default-k8s-diff-port-211952" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:33:05.024165  282279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
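Once the context is written back, it can be confirmed directly against the same kubeconfig (sketch; path and context name from the repair step above):

	# Sketch: the repaired kubeconfig should now list the profile's context
	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/15565-3617/kubeconfig | grep default-k8s-diff-port-211952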
	I0108 21:33:05.025421  282279 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:33:05.032110  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.032154  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.039769  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.240114  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.240215  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.248661  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.439925  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.440040  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.448824  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.640029  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.640100  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.648577  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.839823  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.839949  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.848450  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.040650  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.040716  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.049118  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.240431  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.240537  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.249216  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.440559  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.440631  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.449237  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.640348  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.640440  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.648807  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.840116  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.840207  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.848729  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.039918  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.039988  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.048542  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.240718  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.240800  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.249405  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.440610  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.440687  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.449502  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.640620  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.640687  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.649358  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.840624  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.840691  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.849725  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.039967  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:08.040051  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:08.048653  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.048676  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:08.048717  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:08.056766  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.056803  282279 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0108 21:33:08.056811  282279 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:33:08.056824  282279 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:33:08.056880  282279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:33:08.081283  282279 cri.go:87] found id: "852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	I0108 21:33:08.081308  282279 cri.go:87] found id: "7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc"
	I0108 21:33:08.081315  282279 cri.go:87] found id: "26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225"
	I0108 21:33:08.081322  282279 cri.go:87] found id: "581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d"
	I0108 21:33:08.081330  282279 cri.go:87] found id: "e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa"
	I0108 21:33:08.081340  282279 cri.go:87] found id: "b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d"
	I0108 21:33:08.081349  282279 cri.go:87] found id: ""
	I0108 21:33:08.081357  282279 cri.go:232] Stopping containers: [852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f 7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc 26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225 581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d]
	I0108 21:33:08.081407  282279 ssh_runner.go:195] Run: which crictl
	I0108 21:33:08.084402  282279 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f 7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc 26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225 581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d
	I0108 21:33:08.110089  282279 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:33:08.120362  282279 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:33:08.127839  282279 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan  8 21:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  8 21:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Jan  8 21:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  8 21:20 /etc/kubernetes/scheduler.conf
	
	I0108 21:33:08.127889  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0108 21:33:08.134530  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0108 21:33:08.141215  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0108 21:33:08.147849  282279 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.147901  282279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 21:33:08.154323  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0108 21:33:08.161096  282279 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.161153  282279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 21:33:08.167783  282279 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:33:08.174752  282279 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:33:08.174774  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.220042  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
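After the certs and kubeconfig phases, the regenerated PKI and per-component kubeconfigs can be listed to confirm the restart path is producing files (sketch; directories from the kubeadm options and the configuration-file listing elsewhere in this log):

	# Sketch: list what kubeadm just (re)generated
	sudo ls -l /var/lib/minikube/certs
	sudo ls -l /etc/kubernetes/*.conf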
	I0108 21:33:05.706546  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:07.706879  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:08.484783  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:10.985364  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:08.629802  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.761310  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.827730  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.933064  282279 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:33:08.933117  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:09.442969  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:09.942976  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:10.014802  282279 api_server.go:71] duration metric: took 1.081741817s to wait for apiserver process to appear ...
	I0108 21:33:10.014831  282279 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:33:10.014843  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:10.205696  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:12.206601  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:14.706422  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:13.540654  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:33:13.540692  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:33:14.041349  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:14.045672  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:33:14.045695  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:33:14.540838  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:14.545990  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:33:14.546035  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:33:15.041627  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:15.046572  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 200:
	ok
	I0108 21:33:15.052817  282279 api_server.go:140] control plane version: v1.25.3
	I0108 21:33:15.052839  282279 api_server.go:130] duration metric: took 5.038002036s to wait for apiserver health ...
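Note: the 403 ("system:anonymous" forbidden) and 500 ([-]poststarthook/rbac/bootstrap-roles failed) responses above are expected transient states while the restarted apiserver finishes its post-start hooks; minikube simply keeps polling /healthz until it returns 200 "ok", which happens here after roughly 5 seconds. A minimal, self-contained sketch of that kind of poll loop follows; it assumes a self-signed apiserver certificate and reuses the endpoint from this log, and it is illustrative only, not minikube's api_server.go code.

// Illustrative healthz poller; a sketch under the assumptions stated above,
// not minikube's implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Skip certificate verification only because the test cluster's
		// apiserver serves a self-signed certificate (assumption for this sketch).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 (anonymous user) and 500 (post-start hooks still pending) are
			// expected transient states; 200 with body "ok" means the control
			// plane is serving.
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.67.2:8444/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}

Run against a healthy control plane this exits quietly; against the cluster above it would print the intermediate 403/500 bodies until the RBAC bootstrap roles are in place.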
	I0108 21:33:15.052848  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:33:15.052854  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:33:15.055132  282279 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:33:13.484537  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:15.484590  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:15.056590  282279 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:33:15.060305  282279 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:33:15.060320  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:33:15.073482  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
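Note: with the docker driver and the containerd runtime, minikube selects the kindnet CNI, copies the generated manifest to /var/tmp/minikube/cni.yaml on the node, and applies it with the version-matched kubectl binary, as the three steps above show. The sketch below is a rough stand-alone equivalent of that apply step run directly on the node; the binary and manifest paths are copied from this log and are assumptions about this particular cluster layout, not a general API.

// Rough equivalent of the CNI apply step above, run directly on the node;
// a sketch only, with paths taken from this log.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.25.3/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
	log.Printf("CNI manifest applied:\n%s", out)
}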
	I0108 21:33:15.711930  282279 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:33:15.718666  282279 system_pods.go:59] 9 kube-system pods found
	I0108 21:33:15.718695  282279 system_pods.go:61] "coredns-565d847f94-fd94f" [08c29923-1e9a-4576-884b-e79485bdb24e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718706  282279 system_pods.go:61] "etcd-default-k8s-diff-port-211952" [4d6fe94c-75ef-40cf-b1c1-2377203f2503] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:33:15.718714  282279 system_pods.go:61] "kindnet-52cqk" [4ae6659c-e68a-492e-9e3f-5ffb047114c5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:33:15.718719  282279 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-211952" [e7f5a5bc-2f08-46ed-b8e1-1551fa29d27c] Running
	I0108 21:33:15.718728  282279 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-211952" [28c6bf68-0f27-494d-9102-fc669542c4a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:33:15.718735  282279 system_pods.go:61] "kube-proxy-hz8lw" [fa7c0714-1e45-4256-9383-976e79d1e49e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:33:15.718742  282279 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-211952" [645cd11b-9e55-47fe-aa43-f3b702c95c45] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:33:15.718751  282279 system_pods.go:61] "metrics-server-5c8fd5cf8-l2hp5" [bcd90320-490a-4343-abcb-f40aa375512e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718757  282279 system_pods.go:61] "storage-provisioner" [ad01ceaf-4269-4a54-b47e-b56d85e14354] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718765  282279 system_pods.go:74] duration metric: took 6.815857ms to wait for pod list to return data ...
	I0108 21:33:15.718772  282279 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:33:15.721658  282279 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:33:15.721678  282279 node_conditions.go:123] node cpu capacity is 8
	I0108 21:33:15.721690  282279 node_conditions.go:105] duration metric: took 2.910879ms to run NodePressure ...
	I0108 21:33:15.721709  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:15.850359  282279 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 21:33:15.854037  282279 kubeadm.go:778] kubelet initialised
	I0108 21:33:15.854056  282279 kubeadm.go:779] duration metric: took 3.67496ms waiting for restarted kubelet to initialise ...
	I0108 21:33:15.854063  282279 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:33:15.859567  282279 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" ...
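Note: from this point on, the three minikube processes writing to this log (282279, 278286 and 274657) all report the same symptom: their coredns pod stays Pending because the single node still carries the node.kubernetes.io/not-ready taint, so the scheduler has nowhere to place a pod that does not tolerate it. The sketch below shows, in outline, what such a Ready poll can look like with client-go; it is an illustration under the assumption that a kubeconfig is available locally (the path is a placeholder), not minikube's pod_ready.go implementation.

// Minimal pod-Ready poll with client-go; a hypothetical sketch, not
// minikube's pod_ready.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder path; point this at the kubeconfig for the profile under test.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Mirror the "extra waiting up to 4m0s" bound seen in the log above.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-565d847f94-fd94f", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("coredns is Ready")
			return
		}
		// A pod blocked by the node.kubernetes.io/not-ready taint shows up here
		// as Pending and never gains the Ready condition.
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for coredns to become Ready")
}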
	I0108 21:33:17.864672  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:17.205815  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.206912  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:17.485768  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.985283  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.865551  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:22.365227  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:21.706078  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:23.706755  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:22.485377  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:24.984649  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:24.865051  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:27.364362  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:25.706795  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:28.206074  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:27.484652  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:29.484907  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:31.985181  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:29.365262  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:31.864536  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:30.206547  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:32.705805  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:34.484659  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:36.985157  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:33.865545  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:36.364706  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:35.205900  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:37.206575  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:39.706410  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:39.484405  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:41.485144  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:38.366314  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:40.865544  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:42.205820  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:44.206429  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:43.985033  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:45.985104  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:43.364368  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:45.365457  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:47.865583  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:46.706576  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:49.206474  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:47.985130  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:50.484792  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:50.365374  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:52.865225  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:51.206583  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:53.706500  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:52.984520  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:54.984810  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:55.364623  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:57.365130  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:56.205754  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:58.206523  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:57.484534  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:59.984319  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:01.985026  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:59.865408  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:02.364929  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:00.706734  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:03.206405  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:04.485051  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:06.984884  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:04.864561  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:07.366326  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:05.706010  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:07.706288  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:08.985455  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:11.485043  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:09.865391  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:12.364526  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:10.206460  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:12.705615  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:14.706005  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:13.984826  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:16.484152  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:14.364606  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:16.365289  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:17.206712  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:19.705849  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:18.485130  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:20.485537  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:18.864582  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:20.865195  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:22.865407  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:21.706525  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:24.206204  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:22.984564  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:24.984654  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:25.364979  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:27.365790  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:26.206664  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:28.705923  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:27.485200  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:29.984779  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:31.984961  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:29.865042  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:31.865310  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:30.705966  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:32.706184  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:34.706518  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:33.985148  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.484872  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:33.865432  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.365146  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.706768  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:39.205866  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:38.485130  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:40.984717  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:38.865173  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:41.364499  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:41.705813  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.706112  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.484553  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:45.984290  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.365079  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:45.365570  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:47.865054  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:46.206566  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:48.706606  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:47.984724  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:50.484463  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:50.365544  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:52.864342  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:51.206067  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:53.206386  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:52.484509  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:54.484628  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:56.984663  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:54.865174  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:56.865226  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:55.705777  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:58.206536  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:58.985043  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:00.985441  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:59.365717  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:01.865247  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:00.705686  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:02.706281  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:03.484874  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:05.485178  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:03.865438  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:06.365588  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:05.206221  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:07.206742  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:09.706286  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:07.485379  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:09.485491  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:11.985421  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:08.865293  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:11.364853  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:12.205938  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:14.206587  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:14.484834  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:16.984217  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:13.864458  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:15.865297  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:16.706511  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:19.206844  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:18.985241  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:21.485361  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:18.364605  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:20.365307  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:22.865280  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:21.706576  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:24.206264  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:23.984764  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:25.984921  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:25.365211  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:27.865212  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:26.706631  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:29.205837  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:28.485111  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:30.984944  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:29.865294  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:32.365083  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:31.206819  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:33.706459  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:33.485037  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:35.984758  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:34.864627  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:36.865632  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:36.206617  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:38.705904  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:37.984809  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:39.984942  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:41.985321  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:39.365282  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:41.365393  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:40.706491  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:43.206589  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:44.484609  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:46.985153  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:43.865525  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:46.364697  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:45.705645  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:47.705922  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:49.706709  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:49.484711  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:51.485242  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:48.365304  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:50.865062  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:52.206076  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:54.206636  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:53.984904  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:55.985190  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:53.364585  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:55.866756  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:56.706242  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.706485  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.484404  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.485044  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.365278  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.864694  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:02.865305  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.706662  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:03.206301  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:02.485191  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:04.984589  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:05.365592  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.865076  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:05.705915  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.706822  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.484499  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:09.985336  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:10.364594  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.365393  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:10.206345  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.206780  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:14.705921  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.485725  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:14.982268  278286 pod_ready.go:81] duration metric: took 4m0.003125371s waiting for pod "coredns-565d847f94-jw8vf" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:14.982291  278286 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-jw8vf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:36:14.982340  278286 pod_ready.go:38] duration metric: took 4m0.007969001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:14.982370  278286 kubeadm.go:631] restartCluster took 4m10.8124082s
	W0108 21:36:14.982580  278286 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:36:14.982625  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:36:17.712121  278286 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.729470949s)
	I0108 21:36:17.712185  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:17.722197  278286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:17.729255  278286 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:36:17.729298  278286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:36:17.736461  278286 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:36:17.736503  278286 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:36:17.776074  278286 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:36:17.776141  278286 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:36:17.803264  278286 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:36:17.803362  278286 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:36:17.803405  278286 kubeadm.go:317] OS: Linux
	I0108 21:36:17.803445  278286 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:36:17.803517  278286 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:36:17.803559  278286 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:36:17.803599  278286 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:36:17.803644  278286 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:36:17.803713  278286 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:36:17.803782  278286 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:36:17.803823  278286 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:36:17.803861  278286 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:36:17.868509  278286 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:36:17.868640  278286 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:36:17.868786  278286 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:36:17.980682  278286 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:36:14.864781  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:16.865103  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:17.985661  278286 out.go:204]   - Generating certificates and keys ...
	I0108 21:36:17.985801  278286 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:36:17.985902  278286 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:36:17.986004  278286 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:36:17.986091  278286 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:36:17.986183  278286 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:36:17.986259  278286 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:36:17.986341  278286 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:36:17.986417  278286 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:36:17.986542  278286 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:36:17.986649  278286 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:36:17.986701  278286 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:36:17.986780  278286 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:36:18.059736  278286 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:36:18.157820  278286 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:36:18.409007  278286 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:36:18.508551  278286 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:36:18.520890  278286 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:36:18.521889  278286 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:36:18.521949  278286 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:36:18.609158  278286 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:36:16.706837  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:19.206362  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:18.611390  278286 out.go:204]   - Booting up control plane ...
	I0108 21:36:18.611574  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:36:18.612908  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:36:18.613799  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:36:18.614568  278286 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:36:18.616788  278286 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:36:18.865230  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:20.865904  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:21.705735  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:23.706244  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:24.619697  278286 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002882 seconds
	I0108 21:36:24.619903  278286 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:36:24.627998  278286 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:36:25.143041  278286 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:36:25.143241  278286 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-211859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:36:25.650094  278286 kubeadm.go:317] [bootstrap-token] Using token: 0hs0sx.2quwwfjv2ljr7rle
	I0108 21:36:25.651809  278286 out.go:204]   - Configuring RBAC rules ...
	I0108 21:36:25.651961  278286 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:36:25.654307  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:36:25.658950  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:36:25.660952  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:36:25.662921  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:36:25.664784  278286 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:36:25.671893  278286 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:36:25.864621  278286 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:36:26.057684  278286 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:36:26.058669  278286 kubeadm.go:317] 
	I0108 21:36:26.058754  278286 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:36:26.058765  278286 kubeadm.go:317] 
	I0108 21:36:26.058853  278286 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:36:26.058869  278286 kubeadm.go:317] 
	I0108 21:36:26.058904  278286 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:36:26.058983  278286 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:36:26.059054  278286 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:36:26.059063  278286 kubeadm.go:317] 
	I0108 21:36:26.059140  278286 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 21:36:26.059150  278286 kubeadm.go:317] 
	I0108 21:36:26.059219  278286 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:36:26.059229  278286 kubeadm.go:317] 
	I0108 21:36:26.059298  278286 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:36:26.059393  278286 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:36:26.059498  278286 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:36:26.059510  278286 kubeadm.go:317] 
	I0108 21:36:26.059614  278286 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:36:26.059726  278286 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:36:26.059744  278286 kubeadm.go:317] 
	I0108 21:36:26.059848  278286 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 0hs0sx.2quwwfjv2ljr7rle \
	I0108 21:36:26.059981  278286 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:36:26.060005  278286 kubeadm.go:317] 	--control-plane 
	I0108 21:36:26.060009  278286 kubeadm.go:317] 
	I0108 21:36:26.060140  278286 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:36:26.060156  278286 kubeadm.go:317] 
	I0108 21:36:26.060242  278286 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 0hs0sx.2quwwfjv2ljr7rle \
	I0108 21:36:26.060344  278286 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:36:26.061999  278286 kubeadm.go:317] W0108 21:36:17.771186    3316 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:36:26.062209  278286 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:36:26.062331  278286 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:36:26.062355  278286 cni.go:95] Creating CNI manager for ""
	I0108 21:36:26.062365  278286 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:36:26.064570  278286 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:36:26.066293  278286 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:36:26.112674  278286 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:36:26.112695  278286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:36:26.128247  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:36:26.801006  278286 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:36:26.801092  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:26.801100  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=no-preload-211859 minikube.k8s.io/updated_at=2023_01_08T21_36_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:26.808849  278286 ops.go:34] apiserver oom_adj: -16
	I0108 21:36:26.928188  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:23.365451  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:25.365511  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:27.864750  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:25.706512  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:28.206205  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:27.522837  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:28.022542  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:28.522922  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.022368  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.522328  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:30.022929  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:30.523064  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:31.022221  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:31.522993  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:32.022733  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.865401  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:31.865613  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:30.207607  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:32.705941  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:34.706614  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:32.522593  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:33.022409  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:33.522830  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.022514  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.522961  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:35.023204  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:35.523260  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:36.022528  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:36.522928  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:37.022841  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.364509  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:36.364566  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:37.523049  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.022536  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.522834  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.586979  278286 kubeadm.go:1067] duration metric: took 11.78594385s to wait for elevateKubeSystemPrivileges.
	I0108 21:36:38.587009  278286 kubeadm.go:398] StartCluster complete in 4m34.458658123s
	I0108 21:36:38.587037  278286 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:38.587148  278286 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:36:38.588149  278286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:39.105452  278286 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-211859" rescaled to 1
	I0108 21:36:39.105521  278286 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:36:39.107702  278286 out.go:177] * Verifying Kubernetes components...
	I0108 21:36:39.105557  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:36:39.105612  278286 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:36:39.105739  278286 config.go:180] Loaded profile config "no-preload-211859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:36:39.109968  278286 addons.go:65] Setting storage-provisioner=true in profile "no-preload-211859"
	I0108 21:36:39.109979  278286 addons.go:65] Setting default-storageclass=true in profile "no-preload-211859"
	I0108 21:36:39.109999  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:39.110001  278286 addons.go:227] Setting addon storage-provisioner=true in "no-preload-211859"
	I0108 21:36:39.110004  278286 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-211859"
	W0108 21:36:39.110010  278286 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:36:39.110055  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.109970  278286 addons.go:65] Setting dashboard=true in profile "no-preload-211859"
	I0108 21:36:39.110159  278286 addons.go:227] Setting addon dashboard=true in "no-preload-211859"
	W0108 21:36:39.110169  278286 addons.go:236] addon dashboard should already be in state true
	I0108 21:36:39.110200  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.109981  278286 addons.go:65] Setting metrics-server=true in profile "no-preload-211859"
	I0108 21:36:39.110261  278286 addons.go:227] Setting addon metrics-server=true in "no-preload-211859"
	W0108 21:36:39.110276  278286 addons.go:236] addon metrics-server should already be in state true
	I0108 21:36:39.110330  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.110352  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110511  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110572  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110706  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.151624  278286 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:36:39.153337  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:36:39.153355  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:36:39.153407  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.155756  278286 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:36:39.157349  278286 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:36:39.157371  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:36:39.157418  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.160291  278286 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:36:39.157827  278286 addons.go:227] Setting addon default-storageclass=true in "no-preload-211859"
	W0108 21:36:39.162099  278286 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:36:39.162135  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.162607  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.164649  278286 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:36:37.206095  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:39.206996  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:39.166241  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:36:39.166260  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:36:39.166314  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.193544  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.199785  278286 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:36:39.199812  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:36:39.199862  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.205498  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.208611  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.231311  278286 node_ready.go:35] waiting up to 6m0s for node "no-preload-211859" to be "Ready" ...
	I0108 21:36:39.231694  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:36:39.240040  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.426253  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:36:39.426846  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:36:39.426865  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:36:39.436437  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:36:39.438425  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:36:39.438452  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:36:39.523837  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:36:39.523905  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:36:39.532411  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:36:39.532499  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:36:39.615631  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:36:39.615719  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:36:39.626445  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:36:39.626521  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:36:39.639382  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:36:39.639451  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:36:39.725135  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:36:39.731545  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:36:39.731573  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:36:39.827181  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:36:39.827289  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:36:39.917954  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:36:39.917981  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:36:40.011154  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:36:40.011186  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:36:40.017536  278286 start.go:826] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
	I0108 21:36:40.033803  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:36:40.033827  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:36:40.117534  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:36:40.522822  278286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.096529518s)
	I0108 21:36:40.522881  278286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.086407927s)
	I0108 21:36:40.714945  278286 addons.go:457] Verifying addon metrics-server=true in "no-preload-211859"
	I0108 21:36:41.016673  278286 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-211859 addons enable metrics-server	
	
	
	I0108 21:36:41.018352  278286 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0108 21:36:41.019949  278286 addons.go:488] enableAddons completed in 1.914342148s
	I0108 21:36:41.239026  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:41.203867  274657 pod_ready.go:81] duration metric: took 4m0.002306196s waiting for pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:41.203901  274657 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:36:41.203940  274657 pod_ready.go:38] duration metric: took 4m0.006906053s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:41.203967  274657 kubeadm.go:631] restartCluster took 5m9.671476322s
	W0108 21:36:41.204176  274657 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:36:41.204211  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:36:42.410951  274657 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.206714622s)
	I0108 21:36:42.411034  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:42.420761  274657 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:42.427895  274657 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:36:42.427942  274657 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:36:42.434476  274657 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:36:42.434514  274657 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:36:42.479014  274657 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0108 21:36:42.479075  274657 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:36:42.506527  274657 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:36:42.506650  274657 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:36:42.506722  274657 kubeadm.go:317] OS: Linux
	I0108 21:36:42.506775  274657 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:36:42.506836  274657 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:36:42.506895  274657 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:36:42.506970  274657 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:36:42.507042  274657 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:36:42.507115  274657 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:36:42.575244  274657 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:36:42.575356  274657 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:36:42.575464  274657 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:36:42.705716  274657 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:36:42.707322  274657 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:36:42.714364  274657 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0108 21:36:42.788896  274657 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:36:38.365195  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:40.864900  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:42.865124  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:42.793301  274657 out.go:204]   - Generating certificates and keys ...
	I0108 21:36:42.793445  274657 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:36:42.793584  274657 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:36:42.793709  274657 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:36:42.793804  274657 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:36:42.793866  274657 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:36:42.793909  274657 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:36:42.793956  274657 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:36:42.794003  274657 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:36:42.794059  274657 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:36:42.794113  274657 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:36:42.794145  274657 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:36:42.794211  274657 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:36:42.938030  274657 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:36:43.019391  274657 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:36:43.165446  274657 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:36:43.296073  274657 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:36:43.296890  274657 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:36:43.298841  274657 out.go:204]   - Booting up control plane ...
	I0108 21:36:43.298961  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:36:43.303628  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:36:43.304561  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:36:43.305309  274657 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:36:43.307378  274657 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:36:43.239329  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:45.239687  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:45.365383  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:47.865553  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:47.739338  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:49.739648  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:52.238824  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:51.810038  274657 kubeadm.go:317] [apiclient] All control plane components are healthy after 8.502593 seconds
	I0108 21:36:51.810181  274657 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:36:51.821149  274657 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:36:52.336468  274657 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:36:52.336653  274657 kubeadm.go:317] [mark-control-plane] Marking the node old-k8s-version-211828 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0108 21:36:52.842409  274657 kubeadm.go:317] [bootstrap-token] Using token: ayw1nu.phe95ebgibs3udtw
	I0108 21:36:52.844083  274657 out.go:204]   - Configuring RBAC rules ...
	I0108 21:36:52.844190  274657 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:36:52.847569  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:36:52.850422  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:36:52.852561  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:36:52.854272  274657 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:36:52.894172  274657 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:36:53.257840  274657 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:36:53.258782  274657 kubeadm.go:317] 
	I0108 21:36:53.258856  274657 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:36:53.258871  274657 kubeadm.go:317] 
	I0108 21:36:53.258948  274657 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:36:53.258958  274657 kubeadm.go:317] 
	I0108 21:36:53.258988  274657 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:36:53.259068  274657 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:36:53.259119  274657 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:36:53.259126  274657 kubeadm.go:317] 
	I0108 21:36:53.259165  274657 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:36:53.259250  274657 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:36:53.259306  274657 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:36:53.259310  274657 kubeadm.go:317] 
	I0108 21:36:53.259383  274657 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities 
	I0108 21:36:53.259441  274657 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:36:53.259446  274657 kubeadm.go:317] 
	I0108 21:36:53.259539  274657 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token ayw1nu.phe95ebgibs3udtw \
	I0108 21:36:53.259662  274657 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:36:53.259688  274657 kubeadm.go:317]     --control-plane 	  
	I0108 21:36:53.259694  274657 kubeadm.go:317] 
	I0108 21:36:53.259813  274657 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:36:53.259829  274657 kubeadm.go:317] 
	I0108 21:36:53.259906  274657 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token ayw1nu.phe95ebgibs3udtw \
	I0108 21:36:53.260017  274657 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:36:53.262215  274657 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:36:53.262352  274657 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:36:53.262389  274657 cni.go:95] Creating CNI manager for ""
	I0108 21:36:53.262399  274657 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:36:53.264329  274657 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:36:50.364823  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:52.865232  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:53.265737  274657 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:36:53.269178  274657 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0108 21:36:53.269195  274657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:36:53.282457  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:36:53.488747  274657 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:36:53.488820  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:53.488836  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=old-k8s-version-211828 minikube.k8s.io/updated_at=2023_01_08T21_36_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:53.570539  274657 ops.go:34] apiserver oom_adj: -16
	I0108 21:36:53.570672  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:54.167787  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:54.667921  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:54.239313  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:56.739563  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:55.364998  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:57.365375  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:55.167437  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:55.667880  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:56.167390  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:56.667596  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:57.167755  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:57.667185  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:58.167862  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:58.667300  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:59.167329  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:59.667869  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:59.239207  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:01.738681  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:59.865037  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:02.364695  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:00.167819  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:00.668207  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:01.167287  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:01.668111  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:02.167785  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:02.667989  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:03.167539  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:03.667603  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:04.167676  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:04.667808  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:03.739097  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:05.739401  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:04.864908  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:07.365162  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:05.168182  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:05.667597  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:06.167537  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:06.667619  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:07.168108  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:07.668145  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:08.167448  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:08.262221  274657 kubeadm.go:1067] duration metric: took 14.773463011s to wait for elevateKubeSystemPrivileges.
	I0108 21:37:08.262258  274657 kubeadm.go:398] StartCluster complete in 5m36.772809994s
	I0108 21:37:08.262281  274657 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:08.262401  274657 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:37:08.263456  274657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:08.779968  274657 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-211828" rescaled to 1
	I0108 21:37:08.780035  274657 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:37:08.781734  274657 out.go:177] * Verifying Kubernetes components...
	I0108 21:37:08.780090  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:37:08.780101  274657 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:37:08.780321  274657 config.go:180] Loaded profile config "old-k8s-version-211828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0108 21:37:08.783353  274657 addons.go:65] Setting dashboard=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783365  274657 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783367  274657 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783380  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:08.783385  274657 addons.go:227] Setting addon metrics-server=true in "old-k8s-version-211828"
	I0108 21:37:08.783387  274657 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-211828"
	W0108 21:37:08.783394  274657 addons.go:236] addon metrics-server should already be in state true
	I0108 21:37:08.783441  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783384  274657 addons.go:227] Setting addon dashboard=true in "old-k8s-version-211828"
	W0108 21:37:08.783526  274657 addons.go:236] addon dashboard should already be in state true
	I0108 21:37:08.783568  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783356  274657 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783648  274657 addons.go:227] Setting addon storage-provisioner=true in "old-k8s-version-211828"
	W0108 21:37:08.783668  274657 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:37:08.783727  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783776  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.783927  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.784028  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.784133  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.794999  274657 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-211828" to be "Ready" ...
	I0108 21:37:08.824991  274657 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:37:08.822967  274657 addons.go:227] Setting addon default-storageclass=true in "old-k8s-version-211828"
	W0108 21:37:08.825030  274657 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:37:08.825068  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.826962  274657 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:37:08.825542  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.828596  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:37:08.828602  274657 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:37:08.828610  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:37:08.828632  274657 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:08.830193  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:37:08.831697  274657 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:37:08.830251  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.828662  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.833415  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:37:08.833435  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:37:08.833477  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.865130  274657 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:08.865153  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:37:08.865262  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.870167  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.876829  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.891352  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.895346  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:37:08.901551  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.966952  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:37:08.966980  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:37:09.020839  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:37:09.020864  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:37:09.026679  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:37:09.026702  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:37:09.035881  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:09.036053  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:09.037460  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:37:09.037484  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:37:09.113665  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:09.113699  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:37:09.126531  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:37:09.126566  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:37:09.132355  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:09.142671  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:37:09.142695  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:37:09.225954  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:37:09.225983  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:37:09.311794  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:37:09.311868  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:37:09.321460  274657 start.go:826] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0108 21:37:09.329750  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:37:09.329779  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:37:09.415014  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:37:09.415041  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:37:09.434577  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:09.434608  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:37:09.450703  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:09.848961  274657 addons.go:457] Verifying addon metrics-server=true in "old-k8s-version-211828"
	I0108 21:37:10.258944  274657 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-211828 addons enable metrics-server	
	
	
	I0108 21:37:10.260902  274657 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0108 21:37:07.739683  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:09.740319  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:12.239302  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:09.365405  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:11.865521  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:10.262484  274657 addons.go:488] enableAddons completed in 1.482385235s
	I0108 21:37:10.800978  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:13.301617  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:14.239339  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:16.239538  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:14.364973  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:15.862343  282279 pod_ready.go:81] duration metric: took 4m0.002735215s waiting for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" ...
	E0108 21:37:15.862365  282279 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:37:15.862410  282279 pod_ready.go:38] duration metric: took 4m0.008337756s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:37:15.862442  282279 kubeadm.go:631] restartCluster took 4m10.846498869s
	W0108 21:37:15.862572  282279 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:37:15.862600  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:37:18.604264  282279 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.741643542s)
	I0108 21:37:18.604323  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:18.613785  282279 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:37:18.620707  282279 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:37:18.620756  282279 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:37:18.627110  282279 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:37:18.627161  282279 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:37:18.665230  282279 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:37:18.665379  282279 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:37:18.693390  282279 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:37:18.693485  282279 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:37:18.693536  282279 kubeadm.go:317] OS: Linux
	I0108 21:37:18.693625  282279 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:37:18.693699  282279 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:37:18.693758  282279 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:37:18.693816  282279 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:37:18.693855  282279 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:37:18.693897  282279 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:37:18.693932  282279 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:37:18.693986  282279 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:37:18.694033  282279 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:37:18.757764  282279 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:37:18.757887  282279 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:37:18.757990  282279 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:37:18.880203  282279 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:37:18.885649  282279 out.go:204]   - Generating certificates and keys ...
	I0108 21:37:18.885786  282279 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:37:18.885859  282279 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:37:18.885942  282279 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:37:18.886014  282279 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:37:18.886108  282279 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:37:18.886194  282279 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:37:18.886282  282279 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:37:18.886366  282279 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:37:18.886464  282279 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:37:18.886537  282279 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:37:18.886603  282279 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:37:18.886705  282279 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:37:18.970116  282279 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:37:19.061650  282279 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:37:19.314844  282279 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:37:19.411377  282279 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:37:19.423013  282279 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:37:19.423842  282279 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:37:19.423907  282279 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:37:19.507274  282279 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:37:15.801234  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:18.301292  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:18.738947  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:20.739953  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:19.509473  282279 out.go:204]   - Booting up control plane ...
	I0108 21:37:19.509609  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:37:19.510392  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:37:19.511285  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:37:19.512005  282279 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:37:19.514544  282279 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:37:20.301380  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:22.801865  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:25.517443  282279 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002884 seconds
	I0108 21:37:25.517596  282279 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:37:25.525842  282279 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:37:26.040802  282279 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:37:26.041035  282279 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-diff-port-211952 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:37:26.548645  282279 kubeadm.go:317] [bootstrap-token] Using token: e8jg3u.r5d9gog7fpwiofqp
	I0108 21:37:26.550383  282279 out.go:204]   - Configuring RBAC rules ...
	I0108 21:37:26.550517  282279 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:37:26.553632  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:37:26.561595  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:37:26.563603  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:37:26.566273  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:37:26.569011  282279 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:37:26.577117  282279 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:37:26.777486  282279 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:37:26.956684  282279 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:37:26.957742  282279 kubeadm.go:317] 
	I0108 21:37:26.957841  282279 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:37:26.957852  282279 kubeadm.go:317] 
	I0108 21:37:26.957946  282279 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:37:26.957959  282279 kubeadm.go:317] 
	I0108 21:37:26.957992  282279 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:37:26.958072  282279 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:37:26.958151  282279 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:37:26.958161  282279 kubeadm.go:317] 
	I0108 21:37:26.958244  282279 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 21:37:26.958255  282279 kubeadm.go:317] 
	I0108 21:37:26.958324  282279 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:37:26.958334  282279 kubeadm.go:317] 
	I0108 21:37:26.958411  282279 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:37:26.958519  282279 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:37:26.958614  282279 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:37:26.958627  282279 kubeadm.go:317] 
	I0108 21:37:26.958736  282279 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:37:26.958873  282279 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:37:26.958895  282279 kubeadm.go:317] 
	I0108 21:37:26.958993  282279 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token e8jg3u.r5d9gog7fpwiofqp \
	I0108 21:37:26.959108  282279 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:37:26.959144  282279 kubeadm.go:317] 	--control-plane 
	I0108 21:37:26.959155  282279 kubeadm.go:317] 
	I0108 21:37:26.959279  282279 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:37:26.959295  282279 kubeadm.go:317] 
	I0108 21:37:26.959387  282279 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token e8jg3u.r5d9gog7fpwiofqp \
	I0108 21:37:26.959591  282279 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:37:27.010668  282279 kubeadm.go:317] W0108 21:37:18.659761    3310 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:37:27.010963  282279 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:37:27.011109  282279 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:37:27.011143  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:37:27.011161  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:37:27.013790  282279 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:37:23.239090  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:25.239428  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:27.016436  282279 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:37:27.020247  282279 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:37:27.020267  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:37:27.033939  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:37:27.773746  282279 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:37:27.773820  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:27.773829  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=default-k8s-diff-port-211952 minikube.k8s.io/updated_at=2023_01_08T21_37_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:27.858069  282279 ops.go:34] apiserver oom_adj: -16
	I0108 21:37:27.858162  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:25.301674  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:27.801420  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:27.738878  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:29.739083  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:31.739252  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:28.451616  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:28.951553  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:29.451725  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:29.950766  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.450878  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.951743  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:31.450739  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:31.951303  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:32.450882  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:32.951389  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.301599  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:32.800759  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:33.739342  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:36.238973  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:33.451553  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:33.951640  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:34.451179  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:34.951522  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.450753  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.950904  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:36.450992  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:36.951610  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:37.451311  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:37.951081  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.301523  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:37.800886  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:38.451124  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:38.951311  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:39.451052  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:39.951786  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:40.450906  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:40.622559  282279 kubeadm.go:1067] duration metric: took 12.848793735s to wait for elevateKubeSystemPrivileges.
	I0108 21:37:40.622595  282279 kubeadm.go:398] StartCluster complete in 4m35.649555324s
	I0108 21:37:40.622614  282279 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:40.622704  282279 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:37:40.623799  282279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:41.138673  282279 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-diff-port-211952" rescaled to 1
	I0108 21:37:41.138736  282279 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:37:41.138753  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:37:41.141673  282279 out.go:177] * Verifying Kubernetes components...
	I0108 21:37:41.138793  282279 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:37:41.138974  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:37:41.143598  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:41.143622  282279 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143643  282279 addons.go:227] Setting addon storage-provisioner=true in "default-k8s-diff-port-211952"
	W0108 21:37:41.143652  282279 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:37:41.143672  282279 addons.go:65] Setting default-storageclass=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143694  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.143696  282279 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-211952"
	I0108 21:37:41.143742  282279 addons.go:65] Setting metrics-server=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143751  282279 addons.go:65] Setting dashboard=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143771  282279 addons.go:227] Setting addon metrics-server=true in "default-k8s-diff-port-211952"
	I0108 21:37:41.143780  282279 addons.go:227] Setting addon dashboard=true in "default-k8s-diff-port-211952"
	W0108 21:37:41.143797  282279 addons.go:236] addon dashboard should already be in state true
	I0108 21:37:41.143841  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	W0108 21:37:41.143781  282279 addons.go:236] addon metrics-server should already be in state true
	I0108 21:37:41.143915  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.144018  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144222  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144229  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144299  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.184041  282279 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:37:41.186236  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:37:41.186259  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:37:41.183770  282279 addons.go:227] Setting addon default-storageclass=true in "default-k8s-diff-port-211952"
	I0108 21:37:41.186311  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	W0108 21:37:41.186320  282279 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:37:41.186356  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.187948  282279 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:37:41.186812  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.191003  282279 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:37:41.189639  282279 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:41.192705  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:37:41.192773  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.195052  282279 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:37:38.239104  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:40.239437  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:41.196683  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:37:41.196706  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:37:41.196763  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.221516  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.226288  282279 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:41.226312  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:37:41.226392  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.226595  282279 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-211952" to be "Ready" ...
	I0108 21:37:41.226958  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:37:41.233899  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.236188  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.261350  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.328029  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:37:41.328055  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:37:41.410390  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:37:41.410477  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:37:41.429903  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:41.429978  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:37:41.431528  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:41.434596  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:41.435835  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:37:41.435891  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:37:41.518039  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:41.525611  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:37:41.525635  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:37:41.617739  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:37:41.617770  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:37:41.710400  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:37:41.710430  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:37:41.733619  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:37:41.733650  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:37:41.913693  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:37:41.913722  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:37:41.923702  282279 start.go:826] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0108 21:37:41.939574  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:37:41.939602  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:37:42.033056  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:37:42.033090  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:37:42.126252  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:42.126280  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:37:42.219356  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:42.612393  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.177754873s)
	I0108 21:37:42.649146  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.131058374s)
	I0108 21:37:42.649245  282279 addons.go:457] Verifying addon metrics-server=true in "default-k8s-diff-port-211952"
	I0108 21:37:43.233589  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:43.519132  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.299673532s)
	I0108 21:37:43.521195  282279 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-211952 addons enable metrics-server	
	
	
	I0108 21:37:43.523337  282279 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0108 21:37:39.801595  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:41.801850  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:44.301445  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:42.739717  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:45.239105  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:43.525339  282279 addons.go:488] enableAddons completed in 2.386543882s
	I0108 21:37:45.732797  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:47.733580  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:46.800798  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:48.800989  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:47.738847  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:49.739115  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:52.238899  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:50.232935  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:52.233798  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:50.801073  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:52.801144  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:54.239128  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:56.739014  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:54.733016  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:56.733874  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:55.301797  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:57.801274  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:59.239171  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:01.239292  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:59.233003  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:01.233346  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:03.233665  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:59.801607  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:02.300746  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:04.301290  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:03.738362  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:05.738653  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:05.233897  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:07.234180  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:06.801829  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:09.301092  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:07.739372  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:10.239775  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:09.733403  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:11.733914  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:11.301300  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:13.800777  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:12.739231  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:15.238970  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:14.233667  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:16.732749  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:15.801406  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:17.801519  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:17.738673  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:19.738980  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:22.238583  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:18.733049  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:20.734111  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:23.233585  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:19.801620  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:22.301152  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:24.239366  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:26.738352  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:25.233967  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:27.732889  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:24.801117  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:27.300926  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:29.301266  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:28.739245  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:31.238599  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:29.733825  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:32.234140  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:31.301555  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:33.800917  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:33.239230  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:35.738754  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:34.733077  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:36.733560  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:35.801221  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:37.801365  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:38.239549  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:40.738973  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:38.733737  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:41.232994  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:43.233767  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:40.300687  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:42.301352  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:44.301680  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:42.739381  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:45.238776  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:47.238948  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:45.233859  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:47.733544  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:46.801357  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:48.801472  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:49.739156  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:52.239344  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:49.733766  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:52.233361  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:51.300633  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:53.301297  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:54.239534  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:56.738615  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:54.233916  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:56.733328  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:55.801671  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:58.301397  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:58.738759  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:00.739100  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:58.734209  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:01.232932  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:03.233020  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:00.801536  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:03.300754  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:03.239262  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:05.739203  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:05.233361  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:07.233770  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:05.301375  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:07.800934  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:08.239116  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:10.239161  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:09.733072  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:11.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:09.801368  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:12.301198  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:12.738523  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:14.739235  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:17.239112  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:14.233759  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:16.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:14.801261  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:17.300721  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:19.301075  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:19.738653  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:21.738764  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:18.733878  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:21.233705  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:21.301289  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:23.301516  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:23.738915  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:26.239205  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:23.733860  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:26.233091  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:28.233460  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:25.801475  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:28.301549  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:28.239272  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:30.738619  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:30.733105  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:32.734009  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:30.800660  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:33.301504  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:32.739223  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:35.238771  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:37.238972  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:35.233611  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:37.733328  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:35.801029  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:37.801500  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:39.239140  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:41.739302  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:39.733731  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:42.233801  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:40.301529  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:42.800621  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:44.238840  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:46.239243  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:44.733038  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:46.733391  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:44.801100  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:47.300450  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:49.301320  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:48.739022  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:51.238630  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:49.233954  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:51.733795  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:51.801285  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:53.801488  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:53.739288  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:56.239051  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:54.234004  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:56.733167  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:56.301044  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:58.800845  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:58.738520  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:00.739017  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:59.233766  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:01.733686  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:01.301450  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:03.301533  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:02.739209  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:04.739248  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:06.739344  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:04.233335  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:06.233688  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:08.233796  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:05.800709  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:07.801022  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:09.239054  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:11.739385  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:10.233869  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:12.733211  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:10.300739  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:12.301541  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:14.239654  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:16.739048  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:15.233047  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:17.733710  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:14.801253  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:16.801334  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:18.801736  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:19.238509  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:21.238761  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:20.232874  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:22.232916  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:21.301555  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:23.800846  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:23.239162  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:25.239455  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:27.240625  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:24.233476  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:26.733575  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:25.801246  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:28.301212  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:29.739116  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:31.739148  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:28.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:31.233731  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:33.233890  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:30.301480  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:32.800970  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:34.238950  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:36.239143  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:35.733135  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:37.733332  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:38.738709  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:39.241032  278286 node_ready.go:38] duration metric: took 4m0.009684254s waiting for node "no-preload-211859" to be "Ready" ...
	I0108 21:40:39.243691  278286 out.go:177] 
	W0108 21:40:39.245553  278286 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:40:39.245570  278286 out.go:239] * 
	W0108 21:40:39.246458  278286 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:40:39.249123  278286 out.go:177] 
	I0108 21:40:35.300833  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:37.801290  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:40.233285  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:42.234025  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:40.300917  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:42.301122  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:44.301723  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:44.733707  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:47.232740  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:46.801299  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:48.801395  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:49.233976  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:51.733761  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:51.301336  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:53.301705  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:54.233585  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:56.233841  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:55.801251  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:58.301027  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:58.733149  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:01.233702  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:03.233901  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:00.301463  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:02.801220  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:05.733569  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:08.233143  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:04.801563  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:07.301530  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:08.802728  274657 node_ready.go:38] duration metric: took 4m0.007692604s waiting for node "old-k8s-version-211828" to be "Ready" ...
	I0108 21:41:08.805120  274657 out.go:177] 
	W0108 21:41:08.806709  274657 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:41:08.806733  274657 out.go:239] * 
	W0108 21:41:08.807656  274657 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:41:08.809434  274657 out.go:177] 
	I0108 21:41:10.234013  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:12.733801  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:15.233487  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:17.233814  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:19.233917  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:21.234234  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:23.732866  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:25.733792  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:27.734348  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:30.233612  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:32.233852  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:34.233919  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:36.733239  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:38.733765  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:41.233693  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:41.235775  282279 node_ready.go:38] duration metric: took 4m0.009149141s waiting for node "default-k8s-diff-port-211952" to be "Ready" ...
	I0108 21:41:41.238174  282279 out.go:177] 
	W0108 21:41:41.239722  282279 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:41:41.239744  282279 out.go:239] * 
	W0108 21:41:41.240644  282279 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:41:41.242421  282279 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	b225423b76160       d6e3e26021b60       About a minute ago   Running             kindnet-cni               4                   f6461e142e259
	042459819e07d       d6e3e26021b60       4 minutes ago        Exited              kindnet-cni               3                   f6461e142e259
	36aefc2fd3ef3       beaaf00edd38a       13 minutes ago       Running             kube-proxy                0                   eec8859c8e251
	77cf9a5ca1193       6d23ec0e8b87e       13 minutes ago       Running             kube-scheduler            2                   f4377fb005063
	7f62da141fb9c       0346dbd74bcb9       13 minutes ago       Running             kube-apiserver            2                   788e0349fea64
	c2c7203594cf0       6039992312758       13 minutes ago       Running             kube-controller-manager   2                   d7254b1559d0f
	a93b9d4e3ea9d       a8a176a5d5d69       13 minutes ago       Running             etcd                      2                   cc1481044f8a0
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sun 2023-01-08 21:31:48 UTC, end at Sun 2023-01-08 21:49:42 UTC. --
	Jan 08 21:42:01 no-preload-211859 containerd[386]: time="2023-01-08T21:42:01.685864868Z" level=info msg="RemoveContainer for \"155ea2a3a27d753ec61e5df41b05eb3841a45a2c438abdac464daa7b633c401f\" returns successfully"
	Jan 08 21:42:14 no-preload-211859 containerd[386]: time="2023-01-08T21:42:14.029134680Z" level=info msg="CreateContainer within sandbox \"f6461e142e2593609dafe00c763c83709c1ba57bf8a5e5c434ee754ca31d6f4b\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jan 08 21:42:14 no-preload-211859 containerd[386]: time="2023-01-08T21:42:14.042583588Z" level=info msg="CreateContainer within sandbox \"f6461e142e2593609dafe00c763c83709c1ba57bf8a5e5c434ee754ca31d6f4b\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"c3b02a8dc75029384501886e2277449dbc2931afc1ed0784e53300d3aedee7e8\""
	Jan 08 21:42:14 no-preload-211859 containerd[386]: time="2023-01-08T21:42:14.043128863Z" level=info msg="StartContainer for \"c3b02a8dc75029384501886e2277449dbc2931afc1ed0784e53300d3aedee7e8\""
	Jan 08 21:42:14 no-preload-211859 containerd[386]: time="2023-01-08T21:42:14.213461868Z" level=info msg="StartContainer for \"c3b02a8dc75029384501886e2277449dbc2931afc1ed0784e53300d3aedee7e8\" returns successfully"
	Jan 08 21:44:54 no-preload-211859 containerd[386]: time="2023-01-08T21:44:54.653720858Z" level=info msg="shim disconnected" id=c3b02a8dc75029384501886e2277449dbc2931afc1ed0784e53300d3aedee7e8
	Jan 08 21:44:54 no-preload-211859 containerd[386]: time="2023-01-08T21:44:54.653786093Z" level=warning msg="cleaning up after shim disconnected" id=c3b02a8dc75029384501886e2277449dbc2931afc1ed0784e53300d3aedee7e8 namespace=k8s.io
	Jan 08 21:44:54 no-preload-211859 containerd[386]: time="2023-01-08T21:44:54.653804022Z" level=info msg="cleaning up dead shim"
	Jan 08 21:44:54 no-preload-211859 containerd[386]: time="2023-01-08T21:44:54.662652543Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:44:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5235 runtime=io.containerd.runc.v2\n"
	Jan 08 21:44:54 no-preload-211859 containerd[386]: time="2023-01-08T21:44:54.983044988Z" level=info msg="RemoveContainer for \"645f5298262ba4f6af75a84462916118de4527f05ce2100ceabe82b72e1e8d1d\""
	Jan 08 21:44:54 no-preload-211859 containerd[386]: time="2023-01-08T21:44:54.988892740Z" level=info msg="RemoveContainer for \"645f5298262ba4f6af75a84462916118de4527f05ce2100ceabe82b72e1e8d1d\" returns successfully"
	Jan 08 21:45:18 no-preload-211859 containerd[386]: time="2023-01-08T21:45:18.028393557Z" level=info msg="CreateContainer within sandbox \"f6461e142e2593609dafe00c763c83709c1ba57bf8a5e5c434ee754ca31d6f4b\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jan 08 21:45:18 no-preload-211859 containerd[386]: time="2023-01-08T21:45:18.042314291Z" level=info msg="CreateContainer within sandbox \"f6461e142e2593609dafe00c763c83709c1ba57bf8a5e5c434ee754ca31d6f4b\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"042459819e07dd58ffcb38f75aaab620eabc61896898ef589b35fe8c05790a3d\""
	Jan 08 21:45:18 no-preload-211859 containerd[386]: time="2023-01-08T21:45:18.042910198Z" level=info msg="StartContainer for \"042459819e07dd58ffcb38f75aaab620eabc61896898ef589b35fe8c05790a3d\""
	Jan 08 21:45:18 no-preload-211859 containerd[386]: time="2023-01-08T21:45:18.130742361Z" level=info msg="StartContainer for \"042459819e07dd58ffcb38f75aaab620eabc61896898ef589b35fe8c05790a3d\" returns successfully"
	Jan 08 21:47:58 no-preload-211859 containerd[386]: time="2023-01-08T21:47:58.555868702Z" level=info msg="shim disconnected" id=042459819e07dd58ffcb38f75aaab620eabc61896898ef589b35fe8c05790a3d
	Jan 08 21:47:58 no-preload-211859 containerd[386]: time="2023-01-08T21:47:58.555933834Z" level=warning msg="cleaning up after shim disconnected" id=042459819e07dd58ffcb38f75aaab620eabc61896898ef589b35fe8c05790a3d namespace=k8s.io
	Jan 08 21:47:58 no-preload-211859 containerd[386]: time="2023-01-08T21:47:58.555947779Z" level=info msg="cleaning up dead shim"
	Jan 08 21:47:58 no-preload-211859 containerd[386]: time="2023-01-08T21:47:58.564459757Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:47:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5349 runtime=io.containerd.runc.v2\n"
	Jan 08 21:47:59 no-preload-211859 containerd[386]: time="2023-01-08T21:47:59.307359056Z" level=info msg="RemoveContainer for \"c3b02a8dc75029384501886e2277449dbc2931afc1ed0784e53300d3aedee7e8\""
	Jan 08 21:47:59 no-preload-211859 containerd[386]: time="2023-01-08T21:47:59.312768501Z" level=info msg="RemoveContainer for \"c3b02a8dc75029384501886e2277449dbc2931afc1ed0784e53300d3aedee7e8\" returns successfully"
	Jan 08 21:48:41 no-preload-211859 containerd[386]: time="2023-01-08T21:48:41.028736349Z" level=info msg="CreateContainer within sandbox \"f6461e142e2593609dafe00c763c83709c1ba57bf8a5e5c434ee754ca31d6f4b\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Jan 08 21:48:41 no-preload-211859 containerd[386]: time="2023-01-08T21:48:41.041011864Z" level=info msg="CreateContainer within sandbox \"f6461e142e2593609dafe00c763c83709c1ba57bf8a5e5c434ee754ca31d6f4b\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"b225423b76160edf391185e6a15c23a10f15df4ac745c0d4ea67d906e86c0db8\""
	Jan 08 21:48:41 no-preload-211859 containerd[386]: time="2023-01-08T21:48:41.041427730Z" level=info msg="StartContainer for \"b225423b76160edf391185e6a15c23a10f15df4ac745c0d4ea67d906e86c0db8\""
	Jan 08 21:48:41 no-preload-211859 containerd[386]: time="2023-01-08T21:48:41.125818434Z" level=info msg="StartContainer for \"b225423b76160edf391185e6a15c23a10f15df4ac745c0d4ea67d906e86c0db8\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-211859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-211859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=no-preload-211859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T21_36_26_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 21:36:22 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-211859
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 08 Jan 2023 21:49:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 21:46:48 +0000   Sun, 08 Jan 2023 21:36:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 21:46:48 +0000   Sun, 08 Jan 2023 21:36:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 21:46:48 +0000   Sun, 08 Jan 2023 21:36:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 08 Jan 2023 21:46:48 +0000   Sun, 08 Jan 2023 21:36:20 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-211859
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                1811e86e-6254-4928-9c37-fe78bdd2d83e
	  Boot ID:                    abb1671c-ddf5-4694-bdc8-1024e5cc0b18
	  Kernel Version:             5.15.0-1025-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.10
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-211859                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-4lwd7                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-no-preload-211859             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-211859    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-dw9j2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-211859             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x5 over 13m)  kubelet          Node no-preload-211859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x5 over 13m)  kubelet          Node no-preload-211859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x4 over 13m)  kubelet          Node no-preload-211859 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node no-preload-211859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node no-preload-211859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node no-preload-211859 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-211859 event: Registered Node no-preload-211859 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +2.971851] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027844] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027909] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[Jan 8 21:19] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.006215] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023951] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.967852] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.035798] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023925] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.940341] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.027361] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.019905] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	
	* 
	* ==> etcd [a93b9d4e3ea9d7ddea017392f93e34c2efe6e6f80b028fff2eb8f2985504b8f1] <==
	* {"level":"info","ts":"2023-01-08T21:36:20.012Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-08T21:36:20.012Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-08T21:36:20.012Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-08T21:36:20.012Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-01-08T21:36:20.013Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-01-08T21:36:20.043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2023-01-08T21:36:20.043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2023-01-08T21:36:20.043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2023-01-08T21:36:20.043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2023-01-08T21:36:20.043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2023-01-08T21:36:20.043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2023-01-08T21:36:20.043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2023-01-08T21:36:20.043Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:36:20.044Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:36:20.044Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:36:20.044Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:36:20.044Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-211859 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-08T21:36:20.044Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:36:20.044Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:36:20.045Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-08T21:36:20.045Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-08T21:36:20.046Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-08T21:36:20.046Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2023-01-08T21:46:20.456Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":536}
	{"level":"info","ts":"2023-01-08T21:46:20.457Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":536,"took":"531.505µs"}
	
	* 
	* ==> kernel <==
	*  21:49:42 up  1:32,  0 users,  load average: 0.40, 0.30, 0.56
	Linux no-preload-211859 5.15.0-1025-gcp #32~20.04.2-Ubuntu SMP Tue Nov 29 08:31:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [7f62da141fb9c80fca27bdb124a1de86aea7fc525eac1babc12734cc16fe88b3] <==
	* W0108 21:44:23.841018       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:44:23.841078       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:44:23.841085       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:46:23.843767       1 handler_proxy.go:105] no RequestInfo found in the context
	W0108 21:46:23.843774       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:46:23.843810       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:46:23.843819       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0108 21:46:23.843881       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:46:23.845003       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:47:23.844829       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:47:23.844870       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:47:23.844878       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:47:23.845916       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:47:23.845963       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:47:23.845974       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:49:23.845319       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:49:23.845365       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:49:23.845372       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:49:23.846469       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:49:23.846535       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:49:23.846549       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [c2c7203594cf0ed2b2f7e9c27a5792035f061a09806bab1e72ef33029e1673f7] <==
	* W0108 21:43:38.879607       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:44:08.452186       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:44:08.889946       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:44:38.461512       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:44:38.899650       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:45:08.467841       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:45:08.912485       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:45:38.474127       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:45:38.925044       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:46:08.480322       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:46:08.935655       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:46:38.487042       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:46:38.945850       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:47:08.493925       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:47:08.956473       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:47:38.500704       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:47:38.967600       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:48:08.506431       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:48:08.978788       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:48:38.512682       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:48:38.989439       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:49:08.519358       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:49:09.001658       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:49:38.524792       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:49:39.013024       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [36aefc2fd3ef3cf6d2a078c5204b7e0485abb5fca01ffc7a772f74c95e24d1a7] <==
	* I0108 21:36:39.228890       1 node.go:163] Successfully retrieved node IP: 192.168.85.2
	I0108 21:36:39.228966       1 server_others.go:138] "Detected node IP" address="192.168.85.2"
	I0108 21:36:39.228995       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0108 21:36:39.315602       1 server_others.go:206] "Using iptables Proxier"
	I0108 21:36:39.315644       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0108 21:36:39.315659       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0108 21:36:39.315684       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0108 21:36:39.315727       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:36:39.315913       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:36:39.316159       1 server.go:661] "Version info" version="v1.25.3"
	I0108 21:36:39.316177       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:36:39.318260       1 config.go:317] "Starting service config controller"
	I0108 21:36:39.318290       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0108 21:36:39.318319       1 config.go:226] "Starting endpoint slice config controller"
	I0108 21:36:39.318324       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0108 21:36:39.319273       1 config.go:444] "Starting node config controller"
	I0108 21:36:39.319285       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0108 21:36:39.421155       1 shared_informer.go:262] Caches are synced for node config
	I0108 21:36:39.421225       1 shared_informer.go:262] Caches are synced for service config
	I0108 21:36:39.421371       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [77cf9a5ca119376f6a2d79733c5dc309f991e166abd44a21369d5b7718807cdd] <==
	* E0108 21:36:22.925326       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 21:36:22.925310       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:36:22.925353       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:36:22.926289       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:36:22.925817       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:36:22.925982       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:36:22.926353       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:36:22.926060       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:36:22.926390       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:36:22.926102       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:36:22.926419       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:36:22.926193       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 21:36:22.926269       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:36:22.926448       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 21:36:22.926504       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 21:36:22.926529       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 21:36:23.930685       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 21:36:23.930729       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 21:36:24.002788       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:36:24.002824       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:36:24.003616       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:36:24.003647       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 21:36:24.036364       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:36:24.036402       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0108 21:36:25.821609       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:31:48 UTC, end at Sun 2023-01-08 21:49:43 UTC. --
	Jan 08 21:48:06 no-preload-211859 kubelet[3861]: E0108 21:48:06.325997    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:48:11 no-preload-211859 kubelet[3861]: E0108 21:48:11.326740    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:48:13 no-preload-211859 kubelet[3861]: I0108 21:48:13.026565    3861 scope.go:115] "RemoveContainer" containerID="042459819e07dd58ffcb38f75aaab620eabc61896898ef589b35fe8c05790a3d"
	Jan 08 21:48:13 no-preload-211859 kubelet[3861]: E0108 21:48:13.026902    3861 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-4lwd7_kube-system(3f1d12aa-f47d-4fcc-85fc-8c24cd90ed73)\"" pod="kube-system/kindnet-4lwd7" podUID=3f1d12aa-f47d-4fcc-85fc-8c24cd90ed73
	Jan 08 21:48:16 no-preload-211859 kubelet[3861]: E0108 21:48:16.327532    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:48:21 no-preload-211859 kubelet[3861]: E0108 21:48:21.328981    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:48:26 no-preload-211859 kubelet[3861]: I0108 21:48:26.026801    3861 scope.go:115] "RemoveContainer" containerID="042459819e07dd58ffcb38f75aaab620eabc61896898ef589b35fe8c05790a3d"
	Jan 08 21:48:26 no-preload-211859 kubelet[3861]: E0108 21:48:26.027140    3861 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-4lwd7_kube-system(3f1d12aa-f47d-4fcc-85fc-8c24cd90ed73)\"" pod="kube-system/kindnet-4lwd7" podUID=3f1d12aa-f47d-4fcc-85fc-8c24cd90ed73
	Jan 08 21:48:26 no-preload-211859 kubelet[3861]: E0108 21:48:26.329859    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:48:31 no-preload-211859 kubelet[3861]: E0108 21:48:31.331164    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:48:36 no-preload-211859 kubelet[3861]: E0108 21:48:36.332869    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:48:41 no-preload-211859 kubelet[3861]: I0108 21:48:41.026307    3861 scope.go:115] "RemoveContainer" containerID="042459819e07dd58ffcb38f75aaab620eabc61896898ef589b35fe8c05790a3d"
	Jan 08 21:48:41 no-preload-211859 kubelet[3861]: E0108 21:48:41.334588    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:48:46 no-preload-211859 kubelet[3861]: E0108 21:48:46.336072    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:48:51 no-preload-211859 kubelet[3861]: E0108 21:48:51.337032    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:48:56 no-preload-211859 kubelet[3861]: E0108 21:48:56.338004    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:01 no-preload-211859 kubelet[3861]: E0108 21:49:01.338875    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:06 no-preload-211859 kubelet[3861]: E0108 21:49:06.340056    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:11 no-preload-211859 kubelet[3861]: E0108 21:49:11.341197    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:16 no-preload-211859 kubelet[3861]: E0108 21:49:16.342076    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:21 no-preload-211859 kubelet[3861]: E0108 21:49:21.343812    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:26 no-preload-211859 kubelet[3861]: E0108 21:49:26.345449    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:31 no-preload-211859 kubelet[3861]: E0108 21:49:31.346385    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:36 no-preload-211859 kubelet[3861]: E0108 21:49:36.347351    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:41 no-preload-211859 kubelet[3861]: E0108 21:49:41.348701    3861 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-211859 -n no-preload-211859
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-211859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-565d847f94-vph2s metrics-server-5c8fd5cf8-f6pc8 storage-provisioner dashboard-metrics-scraper-5949f5c576-6cctw kubernetes-dashboard-f87d45d87-z6czc
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-211859 describe pod coredns-565d847f94-vph2s metrics-server-5c8fd5cf8-f6pc8 storage-provisioner dashboard-metrics-scraper-5949f5c576-6cctw kubernetes-dashboard-f87d45d87-z6czc
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-211859 describe pod coredns-565d847f94-vph2s metrics-server-5c8fd5cf8-f6pc8 storage-provisioner dashboard-metrics-scraper-5949f5c576-6cctw kubernetes-dashboard-f87d45d87-z6czc: exit status 1 (67.421301ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-vph2s" not found
	Error from server (NotFound): pods "metrics-server-5c8fd5cf8-f6pc8" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-5949f5c576-6cctw" not found
	Error from server (NotFound): pods "kubernetes-dashboard-f87d45d87-z6czc" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-211859 describe pod coredns-565d847f94-vph2s metrics-server-5c8fd5cf8-f6pc8 storage-provisioner dashboard-metrics-scraper-5949f5c576-6cctw kubernetes-dashboard-f87d45d87-z6czc: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-84b68f675b-t6nsd" [7e9f5e0f-3d62-4617-ad96-12d1d568650f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0108 21:48:36.690998   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0108 21:48:53.344856   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0108 21:50:02.257166   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-211828 -n old-k8s-version-211828

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-01-08 21:50:11.477451231 +0000 UTC m=+4973.620623519
start_stop_delete_test.go:274: (dbg) Run:  kubectl --context old-k8s-version-211828 describe po kubernetes-dashboard-84b68f675b-t6nsd -n kubernetes-dashboard
start_stop_delete_test.go:274: (dbg) Non-zero exit: kubectl --context old-k8s-version-211828 describe po kubernetes-dashboard-84b68f675b-t6nsd -n kubernetes-dashboard: context deadline exceeded (1.49µs)
start_stop_delete_test.go:274: kubectl --context old-k8s-version-211828 describe po kubernetes-dashboard-84b68f675b-t6nsd -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:274: (dbg) Run:  kubectl --context old-k8s-version-211828 logs kubernetes-dashboard-84b68f675b-t6nsd -n kubernetes-dashboard
start_stop_delete_test.go:274: (dbg) Non-zero exit: kubectl --context old-k8s-version-211828 logs kubernetes-dashboard-84b68f675b-t6nsd -n kubernetes-dashboard: context deadline exceeded (98ns)
start_stop_delete_test.go:274: kubectl --context old-k8s-version-211828 logs kubernetes-dashboard-84b68f675b-t6nsd -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
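For manual triage, the pod the test is polling for can be listed by hand with the same context, namespace, and label selector reported above; this is only a sketch of a follow-up check, not part of the harness (the kubectl describe/logs calls above returned in microseconds apparently because the test's own context deadline had already expired):

	# list the dashboard pods the test waits on
	kubectl --context old-k8s-version-211828 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	# show scheduling/pull events for the same pods
	kubectl --context old-k8s-version-211828 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard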
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-211828
helpers_test.go:235: (dbg) docker inspect old-k8s-version-211828:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9",
	        "Created": "2023-01-08T21:18:34.933200191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274969,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:31:15.443918902Z",
	            "FinishedAt": "2023-01-08T21:31:13.76532174Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/hostname",
	        "HostsPath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/hosts",
	        "LogPath": "/var/lib/docker/containers/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9/f66150df9bfbf1ab10c3aba266cbbbbea44cd78fa850fb565ba2519ca7f6b7f9-json.log",
	        "Name": "/old-k8s-version-211828",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-211828:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-211828",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c-init/diff:/var/lib/docker/overlay2/08b33854295261bb38a0074055b3dd7f2f489d35ea70ee5a594182ad1dc0fd2b/diff:/var/lib/docker/overlay2/5abe192e44c49d2a25560e4c1ae4a6ba5ec58100f8e1869d306a28b614caf414/diff:/var/lib/docker/overlay2/156a32df2dac676d7cd6558d5ad13be5054eb2fddc8d1b10137bf34bd6b1ddf7/diff:/var/lib/docker/overlay2/f3a5b8630133b5cdc48bbdb98f666efad221386dbd08f5fc096c7ba6d3f31667/diff:/var/lib/docker/overlay2/4d4ccd218e479874818175ec6c4e28f06f0084eed74b9a83af5a96e8bf41e6c7/diff:/var/lib/docker/overlay2/ab8a3247b29cb45a0024dbc2477a50b042f03e39064c09e23d21a2b5f331013c/diff:/var/lib/docker/overlay2/cd6161fa5d366910a9eb537894f374c49b0ba256c81a9c1ad3d4f4786d730668/diff:/var/lib/docker/overlay2/a26d88db409214dee405ed385f40b0d58e37ef670c26e65c1132eb27e3bb984d/diff:/var/lib/docker/overlay2/f76d833c20bf46fe7aec412f931039cdeb58b129552b97228c55fada8b05e63a/diff:/var/lib/docker/overlay2/584beb
80220462b8bc5a6c651735d55544ebef775c71b1a91b92235049a7e9da/diff:/var/lib/docker/overlay2/5209ebd508837f05291d5f22eff307db47f9c94691dd096cdc6ff6446db45461/diff:/var/lib/docker/overlay2/968fc302c0737742d4698bb0026b5cfc50c4b2934930db492c97b346fca2bd67/diff:/var/lib/docker/overlay2/b469d09ff82a75fb0f6dd04264a04b21d3d113c8577afc6dbc6a7dafe4022179/diff:/var/lib/docker/overlay2/2869116cf2f02bdc6170bde623a11022eba8e5c0f84630c9a0fd0a86483d0ed2/diff:/var/lib/docker/overlay2/ad4794ae637b0ce306963007b35562110fba050d9e3cd6196830448c749d7cc6/diff:/var/lib/docker/overlay2/a53b191897dfffb12ad2948cad16d668a0f7c2be5e3e6fd2556fe22d8d608879/diff:/var/lib/docker/overlay2/bbc7772821406640e29f442eb8752d92d7b72e8314035d6cf94255976b90642e/diff:/var/lib/docker/overlay2/5e3ac5600af83e3f6f756895fbd7a66fa97e799f659541508353078d057fd89b/diff:/var/lib/docker/overlay2/79c23e7084aa1de637cb09a7c07679d6c8d4ff6c6e7384f7f15cb549f8186fd6/diff:/var/lib/docker/overlay2/86ef1d4759376f1edb9cf43004ae59ba61200d79acea357b872de5247071531a/diff:/var/lib/d
ocker/overlay2/5c948a7514e051c2a3b4600944d475a674404c1f58b1bd7fd4d7b764be35c3e7/diff:/var/lib/docker/overlay2/71b25d81823337d776e0202f623b2f822a2ebba0e8f935e3211da145bbdac22d/diff:/var/lib/docker/overlay2/4261f26f8c837e7f8f07ba0cd2aca7274aa96561ac0bd104af28d44c15678f8d/diff:/var/lib/docker/overlay2/f2a71b21156c3ad367bb9fe8bde3e75df6681f9e505d34d8e1b635e50aa0e29b/diff:/var/lib/docker/overlay2/9ef1fde443b60bf4cda7ef2a51adcacef4383280f30eb6ee14e23959a5395e18/diff:/var/lib/docker/overlay2/aa727d5a432452b3e1ed2fc994db0bf2d5a47af89bc768a1e2b633baa8319025/diff:/var/lib/docker/overlay2/dc9499b738334b5626de1363086d853c4e6c2c5b2d51f58e8e1ed8346cc39f45/diff:/var/lib/docker/overlay2/03cc962675627b454b9857b5ffcac439143653d35cd42ad886571fbecd955c6c/diff:/var/lib/docker/overlay2/32064543ecb8bae609fda6b220233d7b73f4e601bd320e1d389451fb120c2459/diff:/var/lib/docker/overlay2/a0c00493cc3fdbdaa87f61faf83c8fcedb9ee4d5b24ecc35eceba0429b6dcc88/diff:/var/lib/docker/overlay2/18cbe77f8dab8144889e1deb46a8795fd207afbdfd9ea3fdeb526b72d7e
4a4cb/diff:/var/lib/docker/overlay2/dca4f5d0bca28bc75fa764afd39d07871f051c2d362fc2850ca957b975f75555/diff:/var/lib/docker/overlay2/cd071d8f661b9ff9c6d861fce06842188fda3d307062488b93944a97883f4a67/diff:/var/lib/docker/overlay2/5a0c166a1e11d0bb9f7de4a882be960f70ac972e91b3ab8fd83fc6eec2bf1d5e/diff:/var/lib/docker/overlay2/43eb3ae44d3bf3e6cbf5a66accacb1247ef210213c56a67097949d06794f4bbc/diff:/var/lib/docker/overlay2/b60cb5b525f4aed6b27cf2e451596e134d1185254f3798d5a91357814552e273/diff:/var/lib/docker/overlay2/e5c8c85e78fedb1981436f368ad70bdfe38d240dddceac3d103e1d09bc252cef/diff:/var/lib/docker/overlay2/7bf66ab7d17eda3c6760fe43588afa33bb56279f5a7202de9cc98b4711f0da5b/diff:/var/lib/docker/overlay2/d5dbd6e8a20b8681e7b55b57c382c8c9029da8496cc9e0f4d1eb3b224583535e/diff:/var/lib/docker/overlay2/de3ca9b514aa5071c0fd39d0d80fa080bce6e82e872747ec916ef7bf379c0c2a/diff:/var/lib/docker/overlay2/80bff03df7f2a6639d98a2c99d1cc55a36dc9110501452bfd7bce3bce496541e/diff:/var/lib/docker/overlay2/471bea5a4732f210c3fcf87a02643b615a96f6
39d01ac2c3e71f7886b23733a5/diff:/var/lib/docker/overlay2/29dd84326e3ef52c85008a60a7ff2225fd18f1d51ec3e60f143a0d93e442469d/diff:/var/lib/docker/overlay2/05d035eba3e40a396836d9917064869afd7ec70920f526f8903dc8f5a9f2dce3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f94fc9304e6856dde90c181cc6ec06aa8d5e14a5211fd4c6e4aa5513d772fd0c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-211828",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-211828/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-211828",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-211828",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-211828",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "309018aa666998324b7412f25b087ca70d071f695cfc1d9a8c847612c87e3f79",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33047"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33046"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33043"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33044"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/309018aa6669",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-211828": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f66150df9bfb",
	                        "old-k8s-version-211828"
	                    ],
	                    "NetworkID": "e48a739a7de53b0a2a21ddeaf3e573efe5bbf8c41c6a15cbe1e7c39d0f359d82",
	                    "EndpointID": "eade8242d93b9948df14457042458d9f5c41719567074de6be7d51293c5d2da9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
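Individual fields can be pulled from this inspect output with a Go template rather than dumping the whole document; a minimal sketch reusing format strings of the kind the harness itself runs later in this log, against the same profile container:

	# container state (equivalent to the harness's --format={{.State.Status}} check)
	docker container inspect -f '{{.State.Status}}' old-k8s-version-211828
	# host port mapped to the container's 22/tcp (SSH) port
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-211828
	# IPv4 address on the profile network
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-211828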
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-211828 -n old-k8s-version-211828
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-211828 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-212639                 | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-212639                      | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-212639 sudo                                  | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| addons  | enable metrics-server -p old-k8s-version-211828            | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-211828                                  | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-211828                 | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-211828                                  | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --kvm-network=default                                      |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                              |         |         |                     |                     |
	|         | --keep-context=false                                       |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-211859                 | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p no-preload-211859                                       | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-211859                      | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p no-preload-211859                                       | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr                                          |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC | 08 Jan 23 21:32 UTC |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC | 08 Jan 23 21:32 UTC |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-211952           | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC | 08 Jan 23 21:32 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC |                     |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-211859                                       | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:49 UTC | 08 Jan 23 21:49 UTC |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 21:32:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:32:48.271671  282279 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:32:48.271850  282279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:32:48.271858  282279 out.go:309] Setting ErrFile to fd 2...
	I0108 21:32:48.271863  282279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:32:48.271968  282279 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:32:48.272502  282279 out.go:303] Setting JSON to false
	I0108 21:32:48.273983  282279 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4518,"bootTime":1673209051,"procs":571,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:32:48.274047  282279 start.go:135] virtualization: kvm guest
	I0108 21:32:48.276504  282279 out.go:177] * [default-k8s-diff-port-211952] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:32:48.277957  282279 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:32:48.277885  282279 notify.go:220] Checking for updates...
	I0108 21:32:48.279445  282279 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:32:48.280736  282279 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:32:48.281949  282279 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:32:48.283257  282279 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:32:48.285163  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:32:48.285682  282279 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:32:48.316260  282279 docker.go:137] docker version: linux-20.10.22
	I0108 21:32:48.316350  282279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:32:48.413793  282279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:32:48.33729701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:32:48.413905  282279 docker.go:254] overlay module found
	I0108 21:32:48.417336  282279 out.go:177] * Using the docker driver based on existing profile
	I0108 21:32:48.418815  282279 start.go:294] selected driver: docker
	I0108 21:32:48.418829  282279 start.go:838] validating driver "docker" against &{Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:32:48.419310  282279 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:32:48.420906  282279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:32:48.521697  282279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:32:48.442146841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:32:48.522015  282279 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:32:48.522046  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:32:48.522065  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:32:48.522085  282279 start_flags.go:317] config:
	{Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:32:48.525023  282279 out.go:177] * Starting control plane node default-k8s-diff-port-211952 in cluster default-k8s-diff-port-211952
	I0108 21:32:48.526212  282279 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:32:48.527567  282279 out.go:177] * Pulling base image ...
	I0108 21:32:48.528812  282279 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:32:48.528852  282279 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0108 21:32:48.528864  282279 cache.go:57] Caching tarball of preloaded images
	I0108 21:32:48.528902  282279 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:32:48.529139  282279 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:32:48.529153  282279 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0108 21:32:48.529259  282279 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/config.json ...
	I0108 21:32:48.553994  282279 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:32:48.554019  282279 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:32:48.554037  282279 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:32:48.554075  282279 start.go:364] acquiring machines lock for default-k8s-diff-port-211952: {Name:mk8d09fc97f48331eb5f466fa120df2ec3fb1468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:32:48.554172  282279 start.go:368] acquired machines lock for "default-k8s-diff-port-211952" in 76.094µs
	I0108 21:32:48.554190  282279 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:32:48.554194  282279 fix.go:55] fixHost starting: 
	I0108 21:32:48.554387  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:32:48.579038  282279 fix.go:103] recreateIfNeeded on default-k8s-diff-port-211952: state=Stopped err=<nil>
	W0108 21:32:48.579064  282279 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:32:48.581203  282279 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-211952" ...
	I0108 21:32:45.206742  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:47.706026  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:47.985367  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:50.484419  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:48.582569  282279 cli_runner.go:164] Run: docker start default-k8s-diff-port-211952
	I0108 21:32:48.934338  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:32:48.961177  282279 kic.go:415] container "default-k8s-diff-port-211952" state is running.
	I0108 21:32:48.961578  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:48.987154  282279 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/config.json ...
	I0108 21:32:48.987361  282279 machine.go:88] provisioning docker machine ...
	I0108 21:32:48.987381  282279 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-211952"
	I0108 21:32:48.987415  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:49.012441  282279 main.go:134] libmachine: Using SSH client type: native
	I0108 21:32:49.012623  282279 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0108 21:32:49.012640  282279 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-211952 && echo "default-k8s-diff-port-211952" | sudo tee /etc/hostname
	I0108 21:32:49.013295  282279 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56504->127.0.0.1:33057: read: connection reset by peer
	I0108 21:32:52.144323  282279 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-211952
	
	I0108 21:32:52.144405  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.170929  282279 main.go:134] libmachine: Using SSH client type: native
	I0108 21:32:52.171092  282279 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0108 21:32:52.171123  282279 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-211952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-211952/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-211952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:32:52.287354  282279 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:32:52.287380  282279 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:32:52.287397  282279 ubuntu.go:177] setting up certificates
	I0108 21:32:52.287404  282279 provision.go:83] configureAuth start
	I0108 21:32:52.287448  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:52.314640  282279 provision.go:138] copyHostCerts
	I0108 21:32:52.314692  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:32:52.314701  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:32:52.314776  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:32:52.314872  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:32:52.314881  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:32:52.314915  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:32:52.314981  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:32:52.314990  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:32:52.315028  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:32:52.315090  282279 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-211952 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-211952]
	I0108 21:32:52.393623  282279 provision.go:172] copyRemoteCerts
	I0108 21:32:52.393682  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:32:52.393732  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.420616  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.506700  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:32:52.523990  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0108 21:32:52.541202  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:32:52.558612  282279 provision.go:86] duration metric: configureAuth took 271.196425ms
	I0108 21:32:52.558637  282279 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:32:52.558842  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:32:52.558859  282279 machine.go:91] provisioned docker machine in 3.571482619s
	I0108 21:32:52.558868  282279 start.go:300] post-start starting for "default-k8s-diff-port-211952" (driver="docker")
	I0108 21:32:52.558880  282279 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:32:52.558932  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:32:52.558975  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.584657  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.674855  282279 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:32:52.677553  282279 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:32:52.677581  282279 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:32:52.677595  282279 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:32:52.677605  282279 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:32:52.677620  282279 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:32:52.677677  282279 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:32:52.677760  282279 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:32:52.677874  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:32:52.684482  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:32:52.701176  282279 start.go:303] post-start completed in 142.293081ms
	I0108 21:32:52.701237  282279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:32:52.701267  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.726596  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.807879  282279 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:32:52.811789  282279 fix.go:57] fixHost completed within 4.257589708s
	I0108 21:32:52.811814  282279 start.go:83] releasing machines lock for "default-k8s-diff-port-211952", held for 4.257630168s
	I0108 21:32:52.811884  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:52.836240  282279 ssh_runner.go:195] Run: cat /version.json
	I0108 21:32:52.836282  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.836337  282279 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:32:52.836380  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.860700  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.862030  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.970766  282279 ssh_runner.go:195] Run: systemctl --version
	I0108 21:32:52.974774  282279 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:32:52.987146  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:32:52.996877  282279 docker.go:189] disabling docker service ...
	I0108 21:32:52.996922  282279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:32:53.006589  282279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:32:53.015555  282279 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:32:53.091863  282279 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:32:53.169568  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:32:53.178903  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:32:53.192470  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:32:53.200832  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:32:53.209487  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:32:53.217000  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 21:32:53.224820  282279 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:32:53.231063  282279 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:32:53.237511  282279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
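[Editor's note] The four sed invocations above patch /etc/containerd/config.toml in place (sandbox image, OOM-score restriction, cgroup driver, CNI conf dir) before containerd is restarted. A minimal Go sketch of the same line-rewrite approach, assuming the file already contains those keys (illustrative only, not minikube's runtime-configuration code):

    // Hypothetical sketch: apply the same line rewrites the sed commands above
    // perform on containerd's config. The path is containerd's usual default.
    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Each rule mirrors one sed expression from the log above.
    	rules := []struct{ re, repl string }{
    		{`(?m)^.*sandbox_image = .*$`, `sandbox_image = "registry.k8s.io/pause:3.8"`},
    		{`(?m)^.*restrict_oom_score_adj = .*$`, `restrict_oom_score_adj = false`},
    		{`(?m)^.*SystemdCgroup = .*$`, `SystemdCgroup = false`},
    		{`(?m)^.*conf_dir = .*$`, `conf_dir = "/etc/cni/net.mk"`},
    	}
    	for _, r := range rules {
    		data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.repl))
    	}
    	if err := os.WriteFile(path, data, 0644); err != nil {
    		panic(err)
    	}
    }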
	I0108 21:32:50.205796  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:52.206925  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:54.705913  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:52.485249  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:54.984287  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:56.984440  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:53.318100  282279 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:32:53.382213  282279 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:32:53.382279  282279 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:32:53.386027  282279 start.go:472] Will wait 60s for crictl version
	I0108 21:32:53.386088  282279 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:32:53.410740  282279 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:32:53Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 21:32:56.706559  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:59.206591  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:59.485251  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:01.985238  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:04.458457  282279 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:33:04.481958  282279 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:33:04.482015  282279 ssh_runner.go:195] Run: containerd --version
	I0108 21:33:04.505934  282279 ssh_runner.go:195] Run: containerd --version
	I0108 21:33:04.531417  282279 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:33:01.206633  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:03.705866  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:04.484384  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:06.484587  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:04.533192  282279 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-211952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:33:04.556070  282279 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0108 21:33:04.559379  282279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:33:04.568499  282279 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:33:04.568548  282279 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:33:04.591581  282279 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:33:04.591606  282279 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:33:04.591658  282279 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:33:04.614523  282279 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:33:04.614545  282279 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:33:04.614587  282279 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:33:04.638172  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:33:04.638197  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:33:04.638209  282279 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:33:04.638221  282279 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-211952 NodeName:default-k8s-diff-port-211952 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:33:04.638396  282279 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-diff-port-211952"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
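[Editor's note] The generated kubeadm config above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); note that the non-default API server port 8444 has to appear both in localAPIEndpoint.bindPort and in controlPlaneEndpoint. A tiny Go text/template sketch of rendering those two fields from a single port value (illustrative only, not minikube's bootstrapper templates):

    // Hypothetical sketch: render the port-bearing parts of a kubeadm config
    // from one value, so bindPort and controlPlaneEndpoint cannot disagree.
    package main

    import (
    	"os"
    	"text/template"
    )

    const snippet = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.IP}}
      bindPort: {{.Port}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: control-plane.minikube.internal:{{.Port}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(snippet))
    	// Values taken from the log above.
    	err := t.Execute(os.Stdout, struct {
    		IP   string
    		Port int
    	}{"192.168.67.2", 8444})
    	if err != nil {
    		panic(err)
    	}
    }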
	
	I0108 21:33:04.638498  282279 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-diff-port-211952 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0108 21:33:04.638546  282279 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:33:04.645671  282279 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:33:04.645725  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:33:04.652367  282279 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (521 bytes)
	I0108 21:33:04.664767  282279 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:33:04.676853  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes)
	I0108 21:33:04.689096  282279 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:33:04.691974  282279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:33:04.700883  282279 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952 for IP: 192.168.67.2
	I0108 21:33:04.700988  282279 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:33:04.701028  282279 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:33:04.701091  282279 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/client.key
	I0108 21:33:04.701143  282279 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key.c7fa3a9e
	I0108 21:33:04.701174  282279 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.key
	I0108 21:33:04.701257  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:33:04.701282  282279 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:33:04.701292  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:33:04.701314  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:33:04.701334  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:33:04.701353  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:33:04.701392  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:33:04.701980  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:33:04.719063  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:33:04.735492  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:33:04.752219  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:33:04.769562  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:33:04.785821  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:33:04.802771  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:33:04.820712  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:33:04.838855  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:33:04.855960  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:33:04.872964  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:33:04.890046  282279 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:33:04.902625  282279 ssh_runner.go:195] Run: openssl version
	I0108 21:33:04.907630  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:33:04.914856  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.917989  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.918039  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.922582  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:33:04.929304  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:33:04.936712  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.939656  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.939705  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.944460  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:33:04.951168  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:33:04.958399  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.961446  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.961485  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.966099  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
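[Editor's note] The openssl/ln sequence above installs each extra certificate under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash (e.g. 3ec20f2e.0), which is how OpenSSL locates trusted CAs by directory lookup. A hedged Go sketch of that hash-and-symlink step, shelling out to openssl for the hash (the paths are examples taken from the log, not a general recipe):

    // Hypothetical sketch: compute a certificate's OpenSSL subject hash and
    // install the <hash>.0 symlink, mirroring the `openssl x509 -hash` +
    // `ln -fs` pair in the log above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/103722.pem" // example path from the log
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	// Recreate the symlink idempotently, like the `test -L || ln -fs` above.
    	_ = os.Remove(link)
    	if err := os.Symlink(cert, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", cert, "->", link)
    }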
	I0108 21:33:04.973053  282279 kubeadm.go:396] StartCluster: {Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:33:04.973140  282279 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:33:04.973193  282279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:33:04.997395  282279 cri.go:87] found id: "852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	I0108 21:33:04.997418  282279 cri.go:87] found id: "7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc"
	I0108 21:33:04.997424  282279 cri.go:87] found id: "26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225"
	I0108 21:33:04.997430  282279 cri.go:87] found id: "581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d"
	I0108 21:33:04.997436  282279 cri.go:87] found id: "e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa"
	I0108 21:33:04.997442  282279 cri.go:87] found id: "b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d"
	I0108 21:33:04.997448  282279 cri.go:87] found id: ""
	I0108 21:33:04.997486  282279 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:33:05.008860  282279 cri.go:114] JSON = null
	W0108 21:33:05.008911  282279 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
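[Editor's note] The warning above comes from cross-checking two views of the same containers: `crictl ps -a --quiet --label ...` returned 6 kube-system container IDs, while `runc --root /run/containerd/runc/k8s.io list` reported none paused, so there is nothing to unpause. A minimal sketch of collecting the crictl side of that check (label and command copied from the log; this is not minikube's cri package):

    // Hypothetical sketch: list kube-system container IDs the same way the
    // `crictl ps -a --quiet --label ...` call in the log does.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listKubeSystemContainers returns the IDs of all containers (running or
    // stopped) whose pod lives in the kube-system namespace.
    func listKubeSystemContainers() ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil // one hex ID per line
    }

    func main() {
    	ids, err := listKubeSystemContainers()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
    }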
	I0108 21:33:05.008979  282279 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:33:05.015919  282279 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:33:05.015939  282279 kubeadm.go:627] restartCluster start
	I0108 21:33:05.015976  282279 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:33:05.022384  282279 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.023096  282279 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-211952" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:33:05.023497  282279 kubeconfig.go:146] "default-k8s-diff-port-211952" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:33:05.024165  282279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:33:05.025421  282279 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:33:05.032110  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.032154  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.039769  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.240114  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.240215  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.248661  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.439925  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.440040  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.448824  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.640029  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.640100  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.648577  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.839823  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.839949  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.848450  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.040650  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.040716  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.049118  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.240431  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.240537  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.249216  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.440559  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.440631  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.449237  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.640348  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.640440  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.648807  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.840116  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.840207  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.848729  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.039918  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.039988  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.048542  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.240718  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.240800  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.249405  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.440610  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.440687  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.449502  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.640620  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.640687  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.649358  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.840624  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.840691  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.849725  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.039967  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:08.040051  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:08.048653  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.048676  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:08.048717  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:08.056766  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.056803  282279 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
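[Editor's note] The repeated "Checking apiserver status ..." lines above are a polling loop: roughly every 200ms minikube runs `pgrep -xnf kube-apiserver.*minikube.*` and, once the deadline passes without a hit, concludes that the cluster needs to be reconfigured. A compact sketch of that wait loop (the interval and timeout here are assumptions, not minikube's actual values):

    // Hypothetical sketch: poll pgrep for a kube-apiserver process until a
    // deadline, mirroring the loop visible in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func waitForAPIServerPID(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil // newest matching PID
    		}
    		time.Sleep(200 * time.Millisecond) // assumed poll interval
    	}
    	return "", fmt.Errorf("timed out waiting for kube-apiserver after %s", timeout)
    }

    func main() {
    	pid, err := waitForAPIServerPID(3 * time.Second)
    	if err != nil {
    		fmt.Println("needs reconfigure:", err)
    		return
    	}
    	fmt.Println("apiserver pid:", pid)
    }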
	I0108 21:33:08.056811  282279 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:33:08.056824  282279 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:33:08.056880  282279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:33:08.081283  282279 cri.go:87] found id: "852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	I0108 21:33:08.081308  282279 cri.go:87] found id: "7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc"
	I0108 21:33:08.081315  282279 cri.go:87] found id: "26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225"
	I0108 21:33:08.081322  282279 cri.go:87] found id: "581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d"
	I0108 21:33:08.081330  282279 cri.go:87] found id: "e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa"
	I0108 21:33:08.081340  282279 cri.go:87] found id: "b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d"
	I0108 21:33:08.081349  282279 cri.go:87] found id: ""
	I0108 21:33:08.081357  282279 cri.go:232] Stopping containers: [852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f 7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc 26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225 581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d]
	I0108 21:33:08.081407  282279 ssh_runner.go:195] Run: which crictl
	I0108 21:33:08.084402  282279 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f 7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc 26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225 581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d
	I0108 21:33:08.110089  282279 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:33:08.120362  282279 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:33:08.127839  282279 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan  8 21:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  8 21:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Jan  8 21:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  8 21:20 /etc/kubernetes/scheduler.conf
	
	I0108 21:33:08.127889  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0108 21:33:08.134530  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0108 21:33:08.141215  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0108 21:33:08.147849  282279 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.147901  282279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 21:33:08.154323  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0108 21:33:08.161096  282279 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.161153  282279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 21:33:08.167783  282279 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:33:08.174752  282279 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:33:08.174774  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.220042  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:05.706546  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:07.706879  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:08.484783  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:10.985364  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:08.629802  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.761310  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.827730  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
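[Editor's note] Because existing configuration files were found, the restart path replays individual `kubeadm init phase ...` commands (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full `kubeadm init`. A sketch of driving that same phase sequence from Go (binary and config paths copied from the log; the wrapping and error handling are simplified assumptions):

    // Hypothetical sketch: run the kubeadm init phases shown in the log,
    // in order, against the generated kubeadm.yaml.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.25.3/kubeadm" // path from the log
    	cfg := "/var/tmp/minikube/kubeadm.yaml"
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{kubeadm}, append(p, "--config", cfg)...)
    		cmd := exec.Command("sudo", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		fmt.Println("running:", cmd.String())
    		if err := cmd.Run(); err != nil {
    			panic(fmt.Errorf("phase %v failed: %w", p, err))
    		}
    	}
    }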
	I0108 21:33:08.933064  282279 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:33:08.933117  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:09.442969  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:09.942976  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:10.014802  282279 api_server.go:71] duration metric: took 1.081741817s to wait for apiserver process to appear ...
	I0108 21:33:10.014831  282279 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:33:10.014843  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:10.205696  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:12.206601  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:14.706422  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:13.540654  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:33:13.540692  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:33:14.041349  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:14.045672  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:33:14.045695  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:33:14.540838  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:14.545990  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:33:14.546035  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:33:15.041627  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:15.046572  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 200:
	ok
	I0108 21:33:15.052817  282279 api_server.go:140] control plane version: v1.25.3
	I0108 21:33:15.052839  282279 api_server.go:130] duration metric: took 5.038002036s to wait for apiserver health ...
	I0108 21:33:15.052848  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:33:15.052854  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:33:15.055132  282279 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:33:13.484537  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:15.484590  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:15.056590  282279 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:33:15.060305  282279 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:33:15.060320  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:33:15.073482  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:33:15.711930  282279 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:33:15.718666  282279 system_pods.go:59] 9 kube-system pods found
	I0108 21:33:15.718695  282279 system_pods.go:61] "coredns-565d847f94-fd94f" [08c29923-1e9a-4576-884b-e79485bdb24e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718706  282279 system_pods.go:61] "etcd-default-k8s-diff-port-211952" [4d6fe94c-75ef-40cf-b1c1-2377203f2503] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:33:15.718714  282279 system_pods.go:61] "kindnet-52cqk" [4ae6659c-e68a-492e-9e3f-5ffb047114c5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:33:15.718719  282279 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-211952" [e7f5a5bc-2f08-46ed-b8e1-1551fa29d27c] Running
	I0108 21:33:15.718728  282279 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-211952" [28c6bf68-0f27-494d-9102-fc669542c4a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:33:15.718735  282279 system_pods.go:61] "kube-proxy-hz8lw" [fa7c0714-1e45-4256-9383-976e79d1e49e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:33:15.718742  282279 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-211952" [645cd11b-9e55-47fe-aa43-f3b702c95c45] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:33:15.718751  282279 system_pods.go:61] "metrics-server-5c8fd5cf8-l2hp5" [bcd90320-490a-4343-abcb-f40aa375512e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718757  282279 system_pods.go:61] "storage-provisioner" [ad01ceaf-4269-4a54-b47e-b56d85e14354] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718765  282279 system_pods.go:74] duration metric: took 6.815857ms to wait for pod list to return data ...
	I0108 21:33:15.718772  282279 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:33:15.721658  282279 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:33:15.721678  282279 node_conditions.go:123] node cpu capacity is 8
	I0108 21:33:15.721690  282279 node_conditions.go:105] duration metric: took 2.910879ms to run NodePressure ...
	I0108 21:33:15.721709  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:15.850359  282279 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 21:33:15.854037  282279 kubeadm.go:778] kubelet initialised
	I0108 21:33:15.854056  282279 kubeadm.go:779] duration metric: took 3.67496ms waiting for restarted kubelet to initialise ...
	I0108 21:33:15.854063  282279 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:33:15.859567  282279 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:17.864672  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:17.205815  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.206912  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:17.485768  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.985283  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.865551  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:22.365227  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:21.706078  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:23.706755  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:22.485377  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:24.984649  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:24.865051  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:27.364362  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:25.706795  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:28.206074  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:27.484652  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:29.484907  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:31.985181  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:29.365262  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:31.864536  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:30.206547  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:32.705805  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:34.484659  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:36.985157  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:33.865545  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:36.364706  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:35.205900  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:37.206575  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:39.706410  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:39.484405  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:41.485144  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:38.366314  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:40.865544  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:42.205820  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:44.206429  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:43.985033  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:45.985104  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:43.364368  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:45.365457  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:47.865583  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:46.706576  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:49.206474  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:47.985130  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:50.484792  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:50.365374  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:52.865225  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:51.206583  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:53.706500  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:52.984520  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:54.984810  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:55.364623  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:57.365130  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:56.205754  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:58.206523  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:57.484534  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:59.984319  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:01.985026  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:59.865408  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:02.364929  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:00.706734  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:03.206405  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:04.485051  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:06.984884  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:04.864561  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:07.366326  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:05.706010  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:07.706288  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:08.985455  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:11.485043  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:09.865391  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:12.364526  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:10.206460  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:12.705615  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:14.706005  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:13.984826  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:16.484152  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:14.364606  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:16.365289  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:17.206712  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:19.705849  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:18.485130  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:20.485537  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:18.864582  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:20.865195  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:22.865407  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:21.706525  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:24.206204  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:22.984564  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:24.984654  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:25.364979  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:27.365790  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:26.206664  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:28.705923  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:27.485200  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:29.984779  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:31.984961  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:29.865042  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:31.865310  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:30.705966  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:32.706184  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:34.706518  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:33.985148  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.484872  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:33.865432  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.365146  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.706768  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:39.205866  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:38.485130  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:40.984717  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:38.865173  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:41.364499  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:41.705813  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.706112  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.484553  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:45.984290  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.365079  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:45.365570  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:47.865054  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:46.206566  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:48.706606  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:47.984724  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:50.484463  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:50.365544  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:52.864342  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:51.206067  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:53.206386  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:52.484509  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:54.484628  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:56.984663  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:54.865174  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:56.865226  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:55.705777  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:58.206536  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:58.985043  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:00.985441  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:59.365717  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:01.865247  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:00.705686  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:02.706281  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:03.484874  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:05.485178  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:03.865438  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:06.365588  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:05.206221  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:07.206742  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:09.706286  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:07.485379  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:09.485491  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:11.985421  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:08.865293  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:11.364853  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:12.205938  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:14.206587  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:14.484834  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:16.984217  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:13.864458  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:15.865297  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:16.706511  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:19.206844  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:18.985241  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:21.485361  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:18.364605  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:20.365307  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:22.865280  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:21.706576  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:24.206264  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:23.984764  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:25.984921  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:25.365211  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:27.865212  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:26.706631  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:29.205837  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:28.485111  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:30.984944  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:29.865294  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:32.365083  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:31.206819  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:33.706459  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:33.485037  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:35.984758  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:34.864627  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:36.865632  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:36.206617  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:38.705904  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:37.984809  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:39.984942  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:41.985321  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:39.365282  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:41.365393  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:40.706491  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:43.206589  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:44.484609  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:46.985153  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:43.865525  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:46.364697  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:45.705645  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:47.705922  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:49.706709  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:49.484711  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:51.485242  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:48.365304  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:50.865062  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:52.206076  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:54.206636  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:53.984904  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:55.985190  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:53.364585  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:55.866756  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:56.706242  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.706485  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.484404  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.485044  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.365278  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.864694  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:02.865305  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.706662  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:03.206301  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:02.485191  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:04.984589  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:05.365592  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.865076  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:05.705915  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.706822  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.484499  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:09.985336  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:10.364594  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.365393  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:10.206345  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.206780  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:14.705921  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.485725  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:14.982268  278286 pod_ready.go:81] duration metric: took 4m0.003125371s waiting for pod "coredns-565d847f94-jw8vf" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:14.982291  278286 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-jw8vf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:36:14.982340  278286 pod_ready.go:38] duration metric: took 4m0.007969001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:14.982370  278286 kubeadm.go:631] restartCluster took 4m10.8124082s
	W0108 21:36:14.982580  278286 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
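The repeated pod_ready.go:102 entries above are minikube polling the CoreDNS pod's "Ready" condition; the pod never leaves Pending because the single node still carries the untolerated node.kubernetes.io/not-ready taint, so the 4m0s wait expires and the cluster is reset. As a rough, hypothetical sketch (not minikube's actual pod_ready.go code), the condition being polled amounts to reading the PodReady entry from the published k8s.io/api types:

	// Hypothetical illustration of a pod "Ready" check against k8s.io/api/core/v1.
	// A Pending pod that was never scheduled (as in the log above) has no PodReady
	// condition at all, so it reports "not Ready" on every poll.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		pending := &corev1.Pod{}         // no status conditions yet, like the coredns pods above
		fmt.Println(isPodReady(pending)) // false
	}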
	I0108 21:36:14.982625  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:36:17.712121  278286 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.729470949s)
	I0108 21:36:17.712185  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:17.722197  278286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:17.729255  278286 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:36:17.729298  278286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:36:17.736461  278286 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:36:17.736503  278286 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:36:17.776074  278286 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:36:17.776141  278286 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:36:17.803264  278286 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:36:17.803362  278286 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:36:17.803405  278286 kubeadm.go:317] OS: Linux
	I0108 21:36:17.803445  278286 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:36:17.803517  278286 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:36:17.803559  278286 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:36:17.803599  278286 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:36:17.803644  278286 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:36:17.803713  278286 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:36:17.803782  278286 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:36:17.803823  278286 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:36:17.803861  278286 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:36:17.868509  278286 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:36:17.868640  278286 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:36:17.868786  278286 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:36:17.980682  278286 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:36:14.864781  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:16.865103  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:17.985661  278286 out.go:204]   - Generating certificates and keys ...
	I0108 21:36:17.985801  278286 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:36:17.985902  278286 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:36:17.986004  278286 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:36:17.986091  278286 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:36:17.986183  278286 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:36:17.986259  278286 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:36:17.986341  278286 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:36:17.986417  278286 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:36:17.986542  278286 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:36:17.986649  278286 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:36:17.986701  278286 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:36:17.986780  278286 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:36:18.059736  278286 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:36:18.157820  278286 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:36:18.409007  278286 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:36:18.508551  278286 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:36:18.520890  278286 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:36:18.521889  278286 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:36:18.521949  278286 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:36:18.609158  278286 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:36:16.706837  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:19.206362  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:18.611390  278286 out.go:204]   - Booting up control plane ...
	I0108 21:36:18.611574  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:36:18.612908  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:36:18.613799  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:36:18.614568  278286 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:36:18.616788  278286 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:36:18.865230  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:20.865904  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:21.705735  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:23.706244  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:24.619697  278286 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002882 seconds
	I0108 21:36:24.619903  278286 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:36:24.627998  278286 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:36:25.143041  278286 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:36:25.143241  278286 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-211859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:36:25.650094  278286 kubeadm.go:317] [bootstrap-token] Using token: 0hs0sx.2quwwfjv2ljr7rle
	I0108 21:36:25.651809  278286 out.go:204]   - Configuring RBAC rules ...
	I0108 21:36:25.651961  278286 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:36:25.654307  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:36:25.658950  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:36:25.660952  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:36:25.662921  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:36:25.664784  278286 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:36:25.671893  278286 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:36:25.864621  278286 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:36:26.057684  278286 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:36:26.058669  278286 kubeadm.go:317] 
	I0108 21:36:26.058754  278286 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:36:26.058765  278286 kubeadm.go:317] 
	I0108 21:36:26.058853  278286 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:36:26.058869  278286 kubeadm.go:317] 
	I0108 21:36:26.058904  278286 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:36:26.058983  278286 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:36:26.059054  278286 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:36:26.059063  278286 kubeadm.go:317] 
	I0108 21:36:26.059140  278286 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 21:36:26.059150  278286 kubeadm.go:317] 
	I0108 21:36:26.059219  278286 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:36:26.059229  278286 kubeadm.go:317] 
	I0108 21:36:26.059298  278286 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:36:26.059393  278286 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:36:26.059498  278286 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:36:26.059510  278286 kubeadm.go:317] 
	I0108 21:36:26.059614  278286 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:36:26.059726  278286 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:36:26.059744  278286 kubeadm.go:317] 
	I0108 21:36:26.059848  278286 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 0hs0sx.2quwwfjv2ljr7rle \
	I0108 21:36:26.059981  278286 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:36:26.060005  278286 kubeadm.go:317] 	--control-plane 
	I0108 21:36:26.060009  278286 kubeadm.go:317] 
	I0108 21:36:26.060140  278286 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:36:26.060156  278286 kubeadm.go:317] 
	I0108 21:36:26.060242  278286 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 0hs0sx.2quwwfjv2ljr7rle \
	I0108 21:36:26.060344  278286 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:36:26.061999  278286 kubeadm.go:317] W0108 21:36:17.771186    3316 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:36:26.062209  278286 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:36:26.062331  278286 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:36:26.062355  278286 cni.go:95] Creating CNI manager for ""
	I0108 21:36:26.062365  278286 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:36:26.064570  278286 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:36:26.066293  278286 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:36:26.112674  278286 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:36:26.112695  278286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:36:26.128247  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:36:26.801006  278286 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:36:26.801092  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:26.801100  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=no-preload-211859 minikube.k8s.io/updated_at=2023_01_08T21_36_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:26.808849  278286 ops.go:34] apiserver oom_adj: -16
	I0108 21:36:26.928188  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:23.365451  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:25.365511  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:27.864750  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:25.706512  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:28.206205  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:27.522837  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:28.022542  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:28.522922  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.022368  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.522328  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:30.022929  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:30.523064  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:31.022221  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:31.522993  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:32.022733  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.865401  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:31.865613  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:30.207607  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:32.705941  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:34.706614  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:32.522593  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:33.022409  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:33.522830  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.022514  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.522961  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:35.023204  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:35.523260  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:36.022528  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:36.522928  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:37.022841  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.364509  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:36.364566  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:37.523049  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.022536  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.522834  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.586979  278286 kubeadm.go:1067] duration metric: took 11.78594385s to wait for elevateKubeSystemPrivileges.
	I0108 21:36:38.587009  278286 kubeadm.go:398] StartCluster complete in 4m34.458658123s
	I0108 21:36:38.587037  278286 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:38.587148  278286 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:36:38.588149  278286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:39.105452  278286 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-211859" rescaled to 1
	I0108 21:36:39.105521  278286 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:36:39.107702  278286 out.go:177] * Verifying Kubernetes components...
	I0108 21:36:39.105557  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:36:39.105612  278286 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:36:39.105739  278286 config.go:180] Loaded profile config "no-preload-211859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:36:39.109968  278286 addons.go:65] Setting storage-provisioner=true in profile "no-preload-211859"
	I0108 21:36:39.109979  278286 addons.go:65] Setting default-storageclass=true in profile "no-preload-211859"
	I0108 21:36:39.109999  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:39.110001  278286 addons.go:227] Setting addon storage-provisioner=true in "no-preload-211859"
	I0108 21:36:39.110004  278286 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-211859"
	W0108 21:36:39.110010  278286 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:36:39.110055  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.109970  278286 addons.go:65] Setting dashboard=true in profile "no-preload-211859"
	I0108 21:36:39.110159  278286 addons.go:227] Setting addon dashboard=true in "no-preload-211859"
	W0108 21:36:39.110169  278286 addons.go:236] addon dashboard should already be in state true
	I0108 21:36:39.110200  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.109981  278286 addons.go:65] Setting metrics-server=true in profile "no-preload-211859"
	I0108 21:36:39.110261  278286 addons.go:227] Setting addon metrics-server=true in "no-preload-211859"
	W0108 21:36:39.110276  278286 addons.go:236] addon metrics-server should already be in state true
	I0108 21:36:39.110330  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.110352  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110511  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110572  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110706  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.151624  278286 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:36:39.153337  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:36:39.153355  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:36:39.153407  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.155756  278286 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:36:39.157349  278286 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:36:39.157371  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:36:39.157418  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.160291  278286 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:36:39.157827  278286 addons.go:227] Setting addon default-storageclass=true in "no-preload-211859"
	W0108 21:36:39.162099  278286 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:36:39.162135  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.162607  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.164649  278286 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:36:37.206095  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:39.206996  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:39.166241  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:36:39.166260  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:36:39.166314  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.193544  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.199785  278286 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:36:39.199812  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:36:39.199862  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.205498  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.208611  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.231311  278286 node_ready.go:35] waiting up to 6m0s for node "no-preload-211859" to be "Ready" ...
	I0108 21:36:39.231694  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
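	(Editor's note: the one-liner above pipes the coredns ConfigMap through sed to insert a hosts block ahead of the existing forward directive, so that host.minikube.internal resolves to the host gateway address, 192.168.85.1 on this cluster network. As a rough sketch, reconstructed purely from the sed expression embedded in the command, the resulting Corefile stanza would read roughly as follows; the surrounding plugins are assumed to be the stock CoreDNS defaults:
	
	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	
	The "host record injected into CoreDNS" line later in this log indicates the replace succeeded.)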
	I0108 21:36:39.240040  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.426253  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:36:39.426846  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:36:39.426865  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:36:39.436437  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:36:39.438425  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:36:39.438452  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:36:39.523837  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:36:39.523905  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:36:39.532411  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:36:39.532499  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:36:39.615631  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:36:39.615719  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:36:39.626445  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:36:39.626521  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:36:39.639382  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:36:39.639451  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:36:39.725135  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:36:39.731545  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:36:39.731573  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:36:39.827181  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:36:39.827289  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:36:39.917954  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:36:39.917981  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:36:40.011154  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:36:40.011186  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:36:40.017536  278286 start.go:826] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
	I0108 21:36:40.033803  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:36:40.033827  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:36:40.117534  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:36:40.522822  278286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.096529518s)
	I0108 21:36:40.522881  278286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.086407927s)
	I0108 21:36:40.714945  278286 addons.go:457] Verifying addon metrics-server=true in "no-preload-211859"
	I0108 21:36:41.016673  278286 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-211859 addons enable metrics-server	
	
	
	I0108 21:36:41.018352  278286 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0108 21:36:41.019949  278286 addons.go:488] enableAddons completed in 1.914342148s
	I0108 21:36:41.239026  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:41.203867  274657 pod_ready.go:81] duration metric: took 4m0.002306196s waiting for pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:41.203901  274657 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:36:41.203940  274657 pod_ready.go:38] duration metric: took 4m0.006906053s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:41.203967  274657 kubeadm.go:631] restartCluster took 5m9.671476322s
	W0108 21:36:41.204176  274657 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:36:41.204211  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:36:42.410951  274657 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.206714622s)
	I0108 21:36:42.411034  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:42.420761  274657 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:42.427895  274657 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:36:42.427942  274657 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:36:42.434476  274657 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:36:42.434514  274657 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:36:42.479014  274657 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0108 21:36:42.479075  274657 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:36:42.506527  274657 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:36:42.506650  274657 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:36:42.506722  274657 kubeadm.go:317] OS: Linux
	I0108 21:36:42.506775  274657 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:36:42.506836  274657 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:36:42.506895  274657 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:36:42.506970  274657 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:36:42.507042  274657 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:36:42.507115  274657 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:36:42.575244  274657 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:36:42.575356  274657 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:36:42.575464  274657 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:36:42.705716  274657 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:36:42.707322  274657 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:36:42.714364  274657 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0108 21:36:42.788896  274657 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:36:38.365195  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:40.864900  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:42.865124  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:42.793301  274657 out.go:204]   - Generating certificates and keys ...
	I0108 21:36:42.793445  274657 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:36:42.793584  274657 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:36:42.793709  274657 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:36:42.793804  274657 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:36:42.793866  274657 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:36:42.793909  274657 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:36:42.793956  274657 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:36:42.794003  274657 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:36:42.794059  274657 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:36:42.794113  274657 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:36:42.794145  274657 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:36:42.794211  274657 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:36:42.938030  274657 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:36:43.019391  274657 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:36:43.165446  274657 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:36:43.296073  274657 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:36:43.296890  274657 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:36:43.298841  274657 out.go:204]   - Booting up control plane ...
	I0108 21:36:43.298961  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:36:43.303628  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:36:43.304561  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:36:43.305309  274657 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:36:43.307378  274657 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:36:43.239329  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:45.239687  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:45.365383  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:47.865553  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:47.739338  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:49.739648  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:52.238824  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:51.810038  274657 kubeadm.go:317] [apiclient] All control plane components are healthy after 8.502593 seconds
	I0108 21:36:51.810181  274657 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:36:51.821149  274657 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:36:52.336468  274657 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:36:52.336653  274657 kubeadm.go:317] [mark-control-plane] Marking the node old-k8s-version-211828 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0108 21:36:52.842409  274657 kubeadm.go:317] [bootstrap-token] Using token: ayw1nu.phe95ebgibs3udtw
	I0108 21:36:52.844083  274657 out.go:204]   - Configuring RBAC rules ...
	I0108 21:36:52.844190  274657 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:36:52.847569  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:36:52.850422  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:36:52.852561  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:36:52.854272  274657 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:36:52.894172  274657 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:36:53.257840  274657 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:36:53.258782  274657 kubeadm.go:317] 
	I0108 21:36:53.258856  274657 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:36:53.258871  274657 kubeadm.go:317] 
	I0108 21:36:53.258948  274657 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:36:53.258958  274657 kubeadm.go:317] 
	I0108 21:36:53.258988  274657 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:36:53.259068  274657 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:36:53.259119  274657 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:36:53.259126  274657 kubeadm.go:317] 
	I0108 21:36:53.259165  274657 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:36:53.259250  274657 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:36:53.259306  274657 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:36:53.259310  274657 kubeadm.go:317] 
	I0108 21:36:53.259383  274657 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities 
	I0108 21:36:53.259441  274657 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:36:53.259446  274657 kubeadm.go:317] 
	I0108 21:36:53.259539  274657 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token ayw1nu.phe95ebgibs3udtw \
	I0108 21:36:53.259662  274657 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:36:53.259688  274657 kubeadm.go:317]     --control-plane 	  
	I0108 21:36:53.259694  274657 kubeadm.go:317] 
	I0108 21:36:53.259813  274657 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:36:53.259829  274657 kubeadm.go:317] 
	I0108 21:36:53.259906  274657 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token ayw1nu.phe95ebgibs3udtw \
	I0108 21:36:53.260017  274657 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:36:53.262215  274657 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:36:53.262352  274657 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:36:53.262389  274657 cni.go:95] Creating CNI manager for ""
	I0108 21:36:53.262399  274657 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:36:53.264329  274657 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:36:50.364823  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:52.865232  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:53.265737  274657 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:36:53.269178  274657 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0108 21:36:53.269195  274657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:36:53.282457  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:36:53.488747  274657 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:36:53.488820  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:53.488836  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=old-k8s-version-211828 minikube.k8s.io/updated_at=2023_01_08T21_36_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:53.570539  274657 ops.go:34] apiserver oom_adj: -16
	I0108 21:36:53.570672  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:54.167787  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:54.667921  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:54.239313  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:56.739563  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:55.364998  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:57.365375  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:55.167437  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:55.667880  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:56.167390  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:56.667596  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:57.167755  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:57.667185  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:58.167862  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:58.667300  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:59.167329  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:59.667869  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:59.239207  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:01.738681  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:59.865037  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:02.364695  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:00.167819  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:00.668207  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:01.167287  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:01.668111  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:02.167785  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:02.667989  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:03.167539  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:03.667603  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:04.167676  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:04.667808  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:03.739097  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:05.739401  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:04.864908  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:07.365162  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:05.168182  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:05.667597  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:06.167537  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:06.667619  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:07.168108  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:07.668145  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:08.167448  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:08.262221  274657 kubeadm.go:1067] duration metric: took 14.773463011s to wait for elevateKubeSystemPrivileges.
	I0108 21:37:08.262258  274657 kubeadm.go:398] StartCluster complete in 5m36.772809994s
	I0108 21:37:08.262281  274657 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:08.262401  274657 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:37:08.263456  274657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:08.779968  274657 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-211828" rescaled to 1
	I0108 21:37:08.780035  274657 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:37:08.781734  274657 out.go:177] * Verifying Kubernetes components...
	I0108 21:37:08.780090  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:37:08.780101  274657 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:37:08.780321  274657 config.go:180] Loaded profile config "old-k8s-version-211828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0108 21:37:08.783353  274657 addons.go:65] Setting dashboard=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783365  274657 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783367  274657 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783380  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:08.783385  274657 addons.go:227] Setting addon metrics-server=true in "old-k8s-version-211828"
	I0108 21:37:08.783387  274657 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-211828"
	W0108 21:37:08.783394  274657 addons.go:236] addon metrics-server should already be in state true
	I0108 21:37:08.783441  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783384  274657 addons.go:227] Setting addon dashboard=true in "old-k8s-version-211828"
	W0108 21:37:08.783526  274657 addons.go:236] addon dashboard should already be in state true
	I0108 21:37:08.783568  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783356  274657 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783648  274657 addons.go:227] Setting addon storage-provisioner=true in "old-k8s-version-211828"
	W0108 21:37:08.783668  274657 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:37:08.783727  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783776  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.783927  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.784028  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.784133  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.794999  274657 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-211828" to be "Ready" ...
	I0108 21:37:08.824991  274657 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:37:08.822967  274657 addons.go:227] Setting addon default-storageclass=true in "old-k8s-version-211828"
	W0108 21:37:08.825030  274657 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:37:08.825068  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.826962  274657 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:37:08.825542  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.828596  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:37:08.828602  274657 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:37:08.828610  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:37:08.828632  274657 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:08.830193  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:37:08.831697  274657 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:37:08.830251  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.828662  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.833415  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:37:08.833435  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:37:08.833477  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.865130  274657 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:08.865153  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:37:08.865262  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.870167  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.876829  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.891352  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.895346  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:37:08.901551  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.966952  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:37:08.966980  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:37:09.020839  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:37:09.020864  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:37:09.026679  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:37:09.026702  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:37:09.035881  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:09.036053  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:09.037460  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:37:09.037484  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:37:09.113665  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:09.113699  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:37:09.126531  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:37:09.126566  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:37:09.132355  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:09.142671  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:37:09.142695  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:37:09.225954  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:37:09.225983  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:37:09.311794  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:37:09.311868  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:37:09.321460  274657 start.go:826] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0108 21:37:09.329750  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:37:09.329779  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:37:09.415014  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:37:09.415041  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:37:09.434577  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:09.434608  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:37:09.450703  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:09.848961  274657 addons.go:457] Verifying addon metrics-server=true in "old-k8s-version-211828"
	I0108 21:37:10.258944  274657 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-211828 addons enable metrics-server	
	
	
	I0108 21:37:10.260902  274657 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0108 21:37:07.739683  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:09.740319  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:12.239302  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:09.365405  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:11.865521  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:10.262484  274657 addons.go:488] enableAddons completed in 1.482385235s
	I0108 21:37:10.800978  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:13.301617  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:14.239339  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:16.239538  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:14.364973  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:15.862343  282279 pod_ready.go:81] duration metric: took 4m0.002735215s waiting for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" ...
	E0108 21:37:15.862365  282279 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:37:15.862410  282279 pod_ready.go:38] duration metric: took 4m0.008337756s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:37:15.862442  282279 kubeadm.go:631] restartCluster took 4m10.846498869s
	W0108 21:37:15.862572  282279 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:37:15.862600  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:37:18.604264  282279 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.741643542s)
	I0108 21:37:18.604323  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:18.613785  282279 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:37:18.620707  282279 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:37:18.620756  282279 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:37:18.627110  282279 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:37:18.627161  282279 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:37:18.665230  282279 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:37:18.665379  282279 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:37:18.693390  282279 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:37:18.693485  282279 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:37:18.693536  282279 kubeadm.go:317] OS: Linux
	I0108 21:37:18.693625  282279 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:37:18.693699  282279 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:37:18.693758  282279 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:37:18.693816  282279 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:37:18.693855  282279 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:37:18.693897  282279 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:37:18.693932  282279 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:37:18.693986  282279 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:37:18.694033  282279 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:37:18.757764  282279 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:37:18.757887  282279 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:37:18.757990  282279 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:37:18.880203  282279 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:37:18.885649  282279 out.go:204]   - Generating certificates and keys ...
	I0108 21:37:18.885786  282279 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:37:18.885859  282279 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:37:18.885942  282279 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:37:18.886014  282279 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:37:18.886108  282279 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:37:18.886194  282279 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:37:18.886282  282279 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:37:18.886366  282279 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:37:18.886464  282279 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:37:18.886537  282279 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:37:18.886603  282279 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:37:18.886705  282279 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:37:18.970116  282279 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:37:19.061650  282279 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:37:19.314844  282279 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:37:19.411377  282279 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:37:19.423013  282279 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:37:19.423842  282279 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:37:19.423907  282279 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:37:19.507274  282279 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:37:15.801234  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:18.301292  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:18.738947  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:20.739953  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:19.509473  282279 out.go:204]   - Booting up control plane ...
	I0108 21:37:19.509609  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:37:19.510392  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:37:19.511285  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:37:19.512005  282279 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:37:19.514544  282279 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:37:20.301380  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:22.801865  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:25.517443  282279 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002884 seconds
	I0108 21:37:25.517596  282279 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:37:25.525842  282279 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:37:26.040802  282279 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:37:26.041035  282279 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-diff-port-211952 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:37:26.548645  282279 kubeadm.go:317] [bootstrap-token] Using token: e8jg3u.r5d9gog7fpwiofqp
	I0108 21:37:26.550383  282279 out.go:204]   - Configuring RBAC rules ...
	I0108 21:37:26.550517  282279 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:37:26.553632  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:37:26.561595  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:37:26.563603  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:37:26.566273  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:37:26.569011  282279 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:37:26.577117  282279 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:37:26.777486  282279 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:37:26.956684  282279 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:37:26.957742  282279 kubeadm.go:317] 
	I0108 21:37:26.957841  282279 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:37:26.957852  282279 kubeadm.go:317] 
	I0108 21:37:26.957946  282279 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:37:26.957959  282279 kubeadm.go:317] 
	I0108 21:37:26.957992  282279 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:37:26.958072  282279 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:37:26.958151  282279 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:37:26.958161  282279 kubeadm.go:317] 
	I0108 21:37:26.958244  282279 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 21:37:26.958255  282279 kubeadm.go:317] 
	I0108 21:37:26.958324  282279 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:37:26.958334  282279 kubeadm.go:317] 
	I0108 21:37:26.958411  282279 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:37:26.958519  282279 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:37:26.958614  282279 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:37:26.958627  282279 kubeadm.go:317] 
	I0108 21:37:26.958736  282279 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:37:26.958873  282279 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:37:26.958895  282279 kubeadm.go:317] 
	I0108 21:37:26.958993  282279 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token e8jg3u.r5d9gog7fpwiofqp \
	I0108 21:37:26.959108  282279 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:37:26.959144  282279 kubeadm.go:317] 	--control-plane 
	I0108 21:37:26.959155  282279 kubeadm.go:317] 
	I0108 21:37:26.959279  282279 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:37:26.959295  282279 kubeadm.go:317] 
	I0108 21:37:26.959387  282279 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token e8jg3u.r5d9gog7fpwiofqp \
	I0108 21:37:26.959591  282279 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:37:27.010668  282279 kubeadm.go:317] W0108 21:37:18.659761    3310 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:37:27.010963  282279 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:37:27.011109  282279 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:37:27.011143  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:37:27.011161  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:37:27.013790  282279 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:37:23.239090  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:25.239428  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:27.016436  282279 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:37:27.020247  282279 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:37:27.020267  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:37:27.033939  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:37:27.773746  282279 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:37:27.773820  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:27.773829  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=default-k8s-diff-port-211952 minikube.k8s.io/updated_at=2023_01_08T21_37_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:27.858069  282279 ops.go:34] apiserver oom_adj: -16
	I0108 21:37:27.858162  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:25.301674  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:27.801420  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:27.738878  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:29.739083  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:31.739252  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:28.451616  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:28.951553  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:29.451725  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:29.950766  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.450878  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.951743  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:31.450739  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:31.951303  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:32.450882  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:32.951389  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.301599  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:32.800759  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:33.739342  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:36.238973  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:33.451553  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:33.951640  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:34.451179  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:34.951522  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.450753  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.950904  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:36.450992  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:36.951610  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:37.451311  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:37.951081  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.301523  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:37.800886  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:38.451124  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:38.951311  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:39.451052  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:39.951786  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:40.450906  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:40.622559  282279 kubeadm.go:1067] duration metric: took 12.848793735s to wait for elevateKubeSystemPrivileges.
	I0108 21:37:40.622595  282279 kubeadm.go:398] StartCluster complete in 4m35.649555324s
	I0108 21:37:40.622614  282279 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:40.622704  282279 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:37:40.623799  282279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:41.138673  282279 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-diff-port-211952" rescaled to 1
	I0108 21:37:41.138736  282279 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:37:41.138753  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:37:41.141673  282279 out.go:177] * Verifying Kubernetes components...
	I0108 21:37:41.138793  282279 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:37:41.138974  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:37:41.143598  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:41.143622  282279 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143643  282279 addons.go:227] Setting addon storage-provisioner=true in "default-k8s-diff-port-211952"
	W0108 21:37:41.143652  282279 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:37:41.143672  282279 addons.go:65] Setting default-storageclass=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143694  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.143696  282279 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-211952"
	I0108 21:37:41.143742  282279 addons.go:65] Setting metrics-server=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143751  282279 addons.go:65] Setting dashboard=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143771  282279 addons.go:227] Setting addon metrics-server=true in "default-k8s-diff-port-211952"
	I0108 21:37:41.143780  282279 addons.go:227] Setting addon dashboard=true in "default-k8s-diff-port-211952"
	W0108 21:37:41.143797  282279 addons.go:236] addon dashboard should already be in state true
	I0108 21:37:41.143841  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	W0108 21:37:41.143781  282279 addons.go:236] addon metrics-server should already be in state true
	I0108 21:37:41.143915  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.144018  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144222  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144229  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144299  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.184041  282279 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:37:41.186236  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:37:41.186259  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:37:41.183770  282279 addons.go:227] Setting addon default-storageclass=true in "default-k8s-diff-port-211952"
	I0108 21:37:41.186311  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	W0108 21:37:41.186320  282279 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:37:41.186356  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.187948  282279 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:37:41.186812  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.191003  282279 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:37:41.189639  282279 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:41.192705  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:37:41.192773  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.195052  282279 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:37:38.239104  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:40.239437  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:41.196683  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:37:41.196706  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:37:41.196763  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.221516  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.226288  282279 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:41.226312  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:37:41.226392  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.226595  282279 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-211952" to be "Ready" ...
	I0108 21:37:41.226958  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:37:41.233899  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.236188  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.261350  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.328029  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:37:41.328055  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:37:41.410390  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:37:41.410477  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:37:41.429903  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:41.429978  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:37:41.431528  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:41.434596  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:41.435835  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:37:41.435891  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:37:41.518039  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:41.525611  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:37:41.525635  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:37:41.617739  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:37:41.617770  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:37:41.710400  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:37:41.710430  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:37:41.733619  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:37:41.733650  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:37:41.913693  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:37:41.913722  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:37:41.923702  282279 start.go:826] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0108 21:37:41.939574  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:37:41.939602  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:37:42.033056  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:37:42.033090  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:37:42.126252  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:42.126280  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:37:42.219356  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:42.612393  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.177754873s)
	I0108 21:37:42.649146  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.131058374s)
	I0108 21:37:42.649245  282279 addons.go:457] Verifying addon metrics-server=true in "default-k8s-diff-port-211952"
	I0108 21:37:43.233589  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:43.519132  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.299673532s)
	I0108 21:37:43.521195  282279 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-211952 addons enable metrics-server	
	
	
	I0108 21:37:43.523337  282279 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0108 21:37:39.801595  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:41.801850  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:44.301445  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:42.739717  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:45.239105  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:43.525339  282279 addons.go:488] enableAddons completed in 2.386543882s
	I0108 21:37:45.732797  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:47.733580  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:46.800798  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:48.800989  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:47.738847  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:49.739115  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:52.238899  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:50.232935  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:52.233798  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:50.801073  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:52.801144  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:54.239128  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:56.739014  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:54.733016  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:56.733874  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:55.301797  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:57.801274  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:59.239171  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:01.239292  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:59.233003  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:01.233346  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:03.233665  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:59.801607  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:02.300746  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:04.301290  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:03.738362  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:05.738653  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:05.233897  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:07.234180  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:06.801829  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:09.301092  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:07.739372  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:10.239775  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:09.733403  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:11.733914  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:11.301300  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:13.800777  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:12.739231  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:15.238970  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:14.233667  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:16.732749  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:15.801406  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:17.801519  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:17.738673  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:19.738980  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:22.238583  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:18.733049  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:20.734111  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:23.233585  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:19.801620  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:22.301152  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:24.239366  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:26.738352  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:25.233967  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:27.732889  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:24.801117  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:27.300926  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:29.301266  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:28.739245  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:31.238599  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:29.733825  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:32.234140  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:31.301555  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:33.800917  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:33.239230  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:35.738754  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:34.733077  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:36.733560  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:35.801221  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:37.801365  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:38.239549  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:40.738973  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:38.733737  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:41.232994  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:43.233767  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:40.300687  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:42.301352  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:44.301680  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:42.739381  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:45.238776  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:47.238948  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:45.233859  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:47.733544  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:46.801357  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:48.801472  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:49.739156  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:52.239344  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:49.733766  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:52.233361  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:51.300633  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:53.301297  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:54.239534  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:56.738615  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:54.233916  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:56.733328  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:55.801671  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:58.301397  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:58.738759  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:00.739100  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:58.734209  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:01.232932  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:03.233020  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:00.801536  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:03.300754  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:03.239262  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:05.739203  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:05.233361  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:07.233770  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:05.301375  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:07.800934  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:08.239116  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:10.239161  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:09.733072  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:11.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:09.801368  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:12.301198  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:12.738523  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:14.739235  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:17.239112  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:14.233759  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:16.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:14.801261  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:17.300721  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:19.301075  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:19.738653  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:21.738764  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:18.733878  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:21.233705  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:21.301289  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:23.301516  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:23.738915  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:26.239205  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:23.733860  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:26.233091  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:28.233460  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:25.801475  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:28.301549  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:28.239272  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:30.738619  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:30.733105  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:32.734009  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:30.800660  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:33.301504  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:32.739223  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:35.238771  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:37.238972  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:35.233611  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:37.733328  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:35.801029  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:37.801500  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:39.239140  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:41.739302  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:39.733731  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:42.233801  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:40.301529  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:42.800621  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:44.238840  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:46.239243  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:44.733038  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:46.733391  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:44.801100  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:47.300450  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:49.301320  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:48.739022  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:51.238630  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:49.233954  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:51.733795  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:51.801285  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:53.801488  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:53.739288  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:56.239051  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:54.234004  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:56.733167  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:56.301044  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:58.800845  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:58.738520  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:00.739017  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:59.233766  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:01.733686  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:01.301450  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:03.301533  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:02.739209  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:04.739248  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:06.739344  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:04.233335  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:06.233688  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:08.233796  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:05.800709  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:07.801022  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:09.239054  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:11.739385  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:10.233869  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:12.733211  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:10.300739  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:12.301541  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:14.239654  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:16.739048  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:15.233047  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:17.733710  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:14.801253  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:16.801334  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:18.801736  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:19.238509  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:21.238761  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:20.232874  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:22.232916  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:21.301555  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:23.800846  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:23.239162  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:25.239455  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:27.240625  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:24.233476  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:26.733575  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:25.801246  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:28.301212  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:29.739116  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:31.739148  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:28.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:31.233731  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:33.233890  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:30.301480  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:32.800970  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:34.238950  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:36.239143  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:35.733135  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:37.733332  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:38.738709  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:39.241032  278286 node_ready.go:38] duration metric: took 4m0.009684254s waiting for node "no-preload-211859" to be "Ready" ...
	I0108 21:40:39.243691  278286 out.go:177] 
	W0108 21:40:39.245553  278286 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:40:39.245570  278286 out.go:239] * 
	W0108 21:40:39.246458  278286 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:40:39.249123  278286 out.go:177] 
	I0108 21:40:35.300833  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:37.801290  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:40.233285  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:42.234025  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:40.300917  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:42.301122  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:44.301723  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:44.733707  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:47.232740  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:46.801299  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:48.801395  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:49.233976  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:51.733761  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:51.301336  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:53.301705  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:54.233585  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:56.233841  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:55.801251  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:58.301027  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:58.733149  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:01.233702  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:03.233901  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:00.301463  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:02.801220  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:05.733569  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:08.233143  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:04.801563  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:07.301530  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:08.802728  274657 node_ready.go:38] duration metric: took 4m0.007692604s waiting for node "old-k8s-version-211828" to be "Ready" ...
	I0108 21:41:08.805120  274657 out.go:177] 
	W0108 21:41:08.806709  274657 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:41:08.806733  274657 out.go:239] * 
	W0108 21:41:08.807656  274657 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:41:08.809434  274657 out.go:177] 
	I0108 21:41:10.234013  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:12.733801  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:15.233487  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:17.233814  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:19.233917  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:21.234234  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:23.732866  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:25.733792  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:27.734348  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:30.233612  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:32.233852  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:34.233919  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:36.733239  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:38.733765  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:41.233693  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:41.235775  282279 node_ready.go:38] duration metric: took 4m0.009149141s waiting for node "default-k8s-diff-port-211952" to be "Ready" ...
	I0108 21:41:41.238174  282279 out.go:177] 
	W0108 21:41:41.239722  282279 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:41:41.239744  282279 out.go:239] * 
	W0108 21:41:41.240644  282279 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:41:41.242421  282279 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	b6402fbf24806       d6e3e26021b60       59 seconds ago      Running             kindnet-cni               4                   229cf0ebba830
	b7935ca79c79d       d6e3e26021b60       4 minutes ago       Exited              kindnet-cni               3                   229cf0ebba830
	0bb48abbf3066       c21b0c7400f98       13 minutes ago      Running             kube-proxy                0                   cfc2e9ff7b2fb
	9b0e57fd243d3       b2756210eeabf       13 minutes ago      Running             etcd                      0                   134c442360b3c
	7458febb17f62       06a629a7e51cd       13 minutes ago      Running             kube-controller-manager   0                   d0163a00edc6f
	216117bba57a4       b305571ca60a5       13 minutes ago      Running             kube-apiserver            0                   110f7899c876b
	34023d0c3e2fc       301ddc62b80b1       13 minutes ago      Running             kube-scheduler            0                   1c7d262754d7c
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sun 2023-01-08 21:31:15 UTC, end at Sun 2023-01-08 21:50:12 UTC. --
	Jan 08 21:42:30 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:42:30.277939522Z" level=info msg="RemoveContainer for \"fabdc3aa883d84ea4981078e3a4b83d031b470cbf3a91dc5972dfa813c7277b1\" returns successfully"
	Jan 08 21:42:44 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:42:44.676425480Z" level=info msg="CreateContainer within sandbox \"229cf0ebba830dd82a892eda3cb6a07896d0dea141cf1cb04d2832750302c340\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jan 08 21:42:44 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:42:44.690552546Z" level=info msg="CreateContainer within sandbox \"229cf0ebba830dd82a892eda3cb6a07896d0dea141cf1cb04d2832750302c340\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"8c15540073e5f7680302f8dbe8f6bd138c8131828ef318f468fabc888ab30bc4\""
	Jan 08 21:42:44 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:42:44.691041878Z" level=info msg="StartContainer for \"8c15540073e5f7680302f8dbe8f6bd138c8131828ef318f468fabc888ab30bc4\""
	Jan 08 21:42:44 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:42:44.826053166Z" level=info msg="StartContainer for \"8c15540073e5f7680302f8dbe8f6bd138c8131828ef318f468fabc888ab30bc4\" returns successfully"
	Jan 08 21:45:25 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:45:25.335566732Z" level=info msg="shim disconnected" id=8c15540073e5f7680302f8dbe8f6bd138c8131828ef318f468fabc888ab30bc4
	Jan 08 21:45:25 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:45:25.335644580Z" level=warning msg="cleaning up after shim disconnected" id=8c15540073e5f7680302f8dbe8f6bd138c8131828ef318f468fabc888ab30bc4 namespace=k8s.io
	Jan 08 21:45:25 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:45:25.335667787Z" level=info msg="cleaning up dead shim"
	Jan 08 21:45:25 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:45:25.345575884Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:45:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5847 runtime=io.containerd.runc.v2\n"
	Jan 08 21:45:25 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:45:25.528845360Z" level=info msg="RemoveContainer for \"862e5d558b0c11f05f02ef9fc1ef81f0678dc4af6cbd49747a0104f86b717fe4\""
	Jan 08 21:45:25 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:45:25.534252463Z" level=info msg="RemoveContainer for \"862e5d558b0c11f05f02ef9fc1ef81f0678dc4af6cbd49747a0104f86b717fe4\" returns successfully"
	Jan 08 21:45:50 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:45:50.676328671Z" level=info msg="CreateContainer within sandbox \"229cf0ebba830dd82a892eda3cb6a07896d0dea141cf1cb04d2832750302c340\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jan 08 21:45:50 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:45:50.689184776Z" level=info msg="CreateContainer within sandbox \"229cf0ebba830dd82a892eda3cb6a07896d0dea141cf1cb04d2832750302c340\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"b7935ca79c79da68573598044479d5486e3ee1a7a0e338cdb63b748a3bab745b\""
	Jan 08 21:45:50 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:45:50.689711628Z" level=info msg="StartContainer for \"b7935ca79c79da68573598044479d5486e3ee1a7a0e338cdb63b748a3bab745b\""
	Jan 08 21:45:50 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:45:50.814538179Z" level=info msg="StartContainer for \"b7935ca79c79da68573598044479d5486e3ee1a7a0e338cdb63b748a3bab745b\" returns successfully"
	Jan 08 21:48:31 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:48:31.358580840Z" level=info msg="shim disconnected" id=b7935ca79c79da68573598044479d5486e3ee1a7a0e338cdb63b748a3bab745b
	Jan 08 21:48:31 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:48:31.358646967Z" level=warning msg="cleaning up after shim disconnected" id=b7935ca79c79da68573598044479d5486e3ee1a7a0e338cdb63b748a3bab745b namespace=k8s.io
	Jan 08 21:48:31 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:48:31.358660104Z" level=info msg="cleaning up dead shim"
	Jan 08 21:48:31 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:48:31.367687469Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:48:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6336 runtime=io.containerd.runc.v2\n"
	Jan 08 21:48:31 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:48:31.794480973Z" level=info msg="RemoveContainer for \"8c15540073e5f7680302f8dbe8f6bd138c8131828ef318f468fabc888ab30bc4\""
	Jan 08 21:48:31 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:48:31.801290391Z" level=info msg="RemoveContainer for \"8c15540073e5f7680302f8dbe8f6bd138c8131828ef318f468fabc888ab30bc4\" returns successfully"
	Jan 08 21:49:12 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:49:12.676007884Z" level=info msg="CreateContainer within sandbox \"229cf0ebba830dd82a892eda3cb6a07896d0dea141cf1cb04d2832750302c340\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Jan 08 21:49:12 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:49:12.688302720Z" level=info msg="CreateContainer within sandbox \"229cf0ebba830dd82a892eda3cb6a07896d0dea141cf1cb04d2832750302c340\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"b6402fbf24806941edeae9620336506e5492f68a013b317a9aafee63e0c5aa0e\""
	Jan 08 21:49:12 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:49:12.688794527Z" level=info msg="StartContainer for \"b6402fbf24806941edeae9620336506e5492f68a013b317a9aafee63e0c5aa0e\""
	Jan 08 21:49:12 old-k8s-version-211828 containerd[386]: time="2023-01-08T21:49:12.825914459Z" level=info msg="StartContainer for \"b6402fbf24806941edeae9620336506e5492f68a013b317a9aafee63e0c5aa0e\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-211828
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-211828
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=old-k8s-version-211828
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T21_36_53_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 21:36:48 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 21:49:48 +0000   Sun, 08 Jan 2023 21:36:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 21:49:48 +0000   Sun, 08 Jan 2023 21:36:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 21:49:48 +0000   Sun, 08 Jan 2023 21:36:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 08 Jan 2023 21:49:48 +0000   Sun, 08 Jan 2023 21:36:45 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-211828
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304681132Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32871748Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304681132Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32871748Ki
	 pods:               110
	System Info:
	 Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	 System UUID:                a9413ae7-d165-4b76-a22b-73b89e3e2d6a
	 Boot ID:                    abb1671c-ddf5-4694-bdc8-1024e5cc0b18
	 Kernel Version:             5.15.0-1025-gcp
	 OS Image:                   Ubuntu 20.04.5 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.6.10
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-211828                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kindnet-vvlch                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                kube-apiserver-old-k8s-version-211828             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-controller-manager-old-k8s-version-211828    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-proxy-wp9ct                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-scheduler-old-k8s-version-211828             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)   100m (1%)
	  memory             50Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  Starting                 13m                kubelet, old-k8s-version-211828     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x9 over 13m)  kubelet, old-k8s-version-211828     Node old-k8s-version-211828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x7 over 13m)  kubelet, old-k8s-version-211828     Node old-k8s-version-211828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet, old-k8s-version-211828     Node old-k8s-version-211828 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet, old-k8s-version-211828     Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kube-proxy, old-k8s-version-211828  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +2.971851] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027844] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027909] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[Jan 8 21:19] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.006215] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023951] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.967852] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.035798] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023925] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.940341] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.027361] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.019905] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	
	* 
	* ==> etcd [9b0e57fd243d3308c0449b3e7a14258d0eaf8edbdd267eb52c589c56f4035882] <==
	* 2023-01-08 21:36:44.428876 I | raft: ea7e25599daad906 became follower at term 0
	2023-01-08 21:36:44.428883 I | raft: newRaft ea7e25599daad906 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-01-08 21:36:44.428886 I | raft: ea7e25599daad906 became follower at term 1
	2023-01-08 21:36:44.432856 W | auth: simple token is not cryptographically signed
	2023-01-08 21:36:44.435013 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-01-08 21:36:44.435460 I | etcdserver: ea7e25599daad906 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-01-08 21:36:44.435729 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2023-01-08 21:36:44.436906 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-01-08 21:36:44.437018 I | embed: listening for metrics on http://192.168.76.2:2381
	2023-01-08 21:36:44.437064 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-01-08 21:36:44.729178 I | raft: ea7e25599daad906 is starting a new election at term 1
	2023-01-08 21:36:44.729213 I | raft: ea7e25599daad906 became candidate at term 2
	2023-01-08 21:36:44.729231 I | raft: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	2023-01-08 21:36:44.729242 I | raft: ea7e25599daad906 became leader at term 2
	2023-01-08 21:36:44.729249 I | raft: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2023-01-08 21:36:44.729515 I | etcdserver: setting up the initial cluster version to 3.3
	2023-01-08 21:36:44.730333 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-01-08 21:36:44.730373 I | etcdserver/api: enabled capabilities for version 3.3
	2023-01-08 21:36:44.730403 I | etcdserver: published {Name:old-k8s-version-211828 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2023-01-08 21:36:44.730413 I | embed: ready to serve client requests
	2023-01-08 21:36:44.730491 I | embed: ready to serve client requests
	2023-01-08 21:36:44.732637 I | embed: serving client requests on 127.0.0.1:2379
	2023-01-08 21:36:44.732810 I | embed: serving client requests on 192.168.76.2:2379
	2023-01-08 21:46:45.126791 I | mvcc: store.index: compact 577
	2023-01-08 21:46:45.127807 I | mvcc: finished scheduled compaction at 577 (took 680.026µs)
	
	* 
	* ==> kernel <==
	*  21:50:12 up  1:32,  0 users,  load average: 0.24, 0.27, 0.54
	Linux old-k8s-version-211828 5.15.0-1025-gcp #32~20.04.2-Ubuntu SMP Tue Nov 29 08:31:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [216117bba57a438567af62bcbd3048094bf895ba1b4696bb7f6074dbbd62f7bb] <==
	* I0108 21:42:49.122224       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 21:42:49.122293       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 21:42:49.122328       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:42:49.122339       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 21:44:49.122554       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 21:44:49.122628       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 21:44:49.122696       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:44:49.122711       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 21:46:49.123358       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 21:46:49.123437       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 21:46:49.123541       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:46:49.123554       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 21:47:49.123726       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 21:47:49.123789       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 21:47:49.123854       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:47:49.123868       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 21:49:49.124063       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 21:49:49.124150       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 21:49:49.124235       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:49:49.124252       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [7458febb17f623ccfde91c37aa81891ecb2f13e33f73798e42f7a457866c74d1] <==
	* E0108 21:43:42.008852       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:44:04.053829       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:44:12.260380       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:44:36.055213       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:44:42.511837       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:45:08.056775       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:45:12.763281       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:45:40.058432       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:45:43.014867       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:46:12.059941       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:46:13.266360       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0108 21:46:43.517588       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:46:44.061514       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:47:13.768837       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:47:16.062896       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:47:44.020394       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:47:48.064375       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:48:14.272109       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:48:20.065826       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:48:44.523537       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:48:52.067293       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:49:14.775054       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:49:24.068869       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:49:45.026660       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:49:56.070226       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [0bb48abbf3066147aa288f5bdce84119ea74e56fe7ae2cf25ac4776d3cd01e62] <==
	* W0108 21:37:08.172189       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0108 21:37:08.178399       1 node.go:135] Successfully retrieved node IP: 192.168.76.2
	I0108 21:37:08.178430       1 server_others.go:149] Using iptables Proxier.
	I0108 21:37:08.178857       1 server.go:529] Version: v1.16.0
	I0108 21:37:08.179577       1 config.go:131] Starting endpoints config controller
	I0108 21:37:08.179599       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0108 21:37:08.179648       1 config.go:313] Starting service config controller
	I0108 21:37:08.179671       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0108 21:37:08.279769       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0108 21:37:08.279875       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [34023d0c3e2fc5bef5079f1ef1f9d579d8a9492830d49eeabfbacdba442fea14] <==
	* E0108 21:36:48.214857       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:36:48.214865       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:36:48.215011       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:36:48.215167       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:36:48.214865       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:36:48.218773       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:36:48.218868       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:36:48.218906       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:36:48.218987       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:36:48.219062       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:36:48.219087       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:36:49.216126       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:36:49.219650       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:36:49.220671       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:36:49.221806       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:36:49.222909       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:36:49.224170       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:36:49.225242       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:36:49.226271       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:36:49.227413       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:36:49.228824       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:36:49.230035       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:37:07.733901       1 factory.go:585] pod is already present in the activeQ
	E0108 21:37:09.716350       1 factory.go:585] pod is already present in the activeQ
	E0108 21:37:11.314020       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:31:15 UTC, end at Sun 2023-01-08 21:50:12 UTC. --
	Jan 08 21:48:23 old-k8s-version-211828 kubelet[3000]: E0108 21:48:23.865535    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:48:28 old-k8s-version-211828 kubelet[3000]: E0108 21:48:28.866264    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:48:31 old-k8s-version-211828 kubelet[3000]: E0108 21:48:31.794411    3000 pod_workers.go:191] Error syncing pod 726e82bd-431c-44e0-9ba6-300e9f0997d0 ("kindnet-vvlch_kube-system(726e82bd-431c-44e0-9ba6-300e9f0997d0)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-vvlch_kube-system(726e82bd-431c-44e0-9ba6-300e9f0997d0)"
	Jan 08 21:48:33 old-k8s-version-211828 kubelet[3000]: E0108 21:48:33.867039    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:48:38 old-k8s-version-211828 kubelet[3000]: E0108 21:48:38.867825    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:48:43 old-k8s-version-211828 kubelet[3000]: E0108 21:48:43.674082    3000 pod_workers.go:191] Error syncing pod 726e82bd-431c-44e0-9ba6-300e9f0997d0 ("kindnet-vvlch_kube-system(726e82bd-431c-44e0-9ba6-300e9f0997d0)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-vvlch_kube-system(726e82bd-431c-44e0-9ba6-300e9f0997d0)"
	Jan 08 21:48:43 old-k8s-version-211828 kubelet[3000]: E0108 21:48:43.868495    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:48:48 old-k8s-version-211828 kubelet[3000]: E0108 21:48:48.869272    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:48:53 old-k8s-version-211828 kubelet[3000]: E0108 21:48:53.870021    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:48:57 old-k8s-version-211828 kubelet[3000]: E0108 21:48:57.673929    3000 pod_workers.go:191] Error syncing pod 726e82bd-431c-44e0-9ba6-300e9f0997d0 ("kindnet-vvlch_kube-system(726e82bd-431c-44e0-9ba6-300e9f0997d0)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-vvlch_kube-system(726e82bd-431c-44e0-9ba6-300e9f0997d0)"
	Jan 08 21:48:58 old-k8s-version-211828 kubelet[3000]: E0108 21:48:58.870740    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:49:03 old-k8s-version-211828 kubelet[3000]: E0108 21:49:03.871468    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:49:08 old-k8s-version-211828 kubelet[3000]: E0108 21:49:08.872172    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:49:13 old-k8s-version-211828 kubelet[3000]: E0108 21:49:13.872829    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:49:18 old-k8s-version-211828 kubelet[3000]: E0108 21:49:18.873459    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:49:23 old-k8s-version-211828 kubelet[3000]: E0108 21:49:23.874001    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:49:28 old-k8s-version-211828 kubelet[3000]: E0108 21:49:28.874819    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:49:33 old-k8s-version-211828 kubelet[3000]: E0108 21:49:33.875610    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:49:38 old-k8s-version-211828 kubelet[3000]: E0108 21:49:38.876410    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:49:43 old-k8s-version-211828 kubelet[3000]: E0108 21:49:43.877144    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:49:48 old-k8s-version-211828 kubelet[3000]: E0108 21:49:48.877986    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:49:53 old-k8s-version-211828 kubelet[3000]: E0108 21:49:53.878980    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:49:58 old-k8s-version-211828 kubelet[3000]: E0108 21:49:58.879775    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:50:03 old-k8s-version-211828 kubelet[3000]: E0108 21:50:03.880583    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Jan 08 21:50:08 old-k8s-version-211828 kubelet[3000]: E0108 21:50:08.881414    3000 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

                                                
                                                
-- /stdout --
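The kubelet entries at the end of the captured logs show the node stuck on "cni plugin not initialized" while the kindnet-cni container crash-loops. A minimal way to confirm the CNI state on this node is sketched below; it reuses the old-k8s-version-211828 profile and the kindnet-vvlch pod name from the log above, and assumes kindnet writes its config to the standard /etc/cni/net.d directory:

	# Check whether any CNI config was ever written on the node
	out/minikube-linux-amd64 ssh -p old-k8s-version-211828 -- ls /etc/cni/net.d
	# Look at the crash-looping kindnet pod and its previous container log
	kubectl --context old-k8s-version-211828 -n kube-system get pod kindnet-vvlch -o wide
	kubectl --context old-k8s-version-211828 -n kube-system logs kindnet-vvlch --previous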
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-211828 -n old-k8s-version-211828

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-211828 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-5644d7b6d9-hk88c metrics-server-7958775c-7tzmp storage-provisioner dashboard-metrics-scraper-6d58c4d9b5-hzcd6 kubernetes-dashboard-84b68f675b-t6nsd
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-211828 describe pod coredns-5644d7b6d9-hk88c metrics-server-7958775c-7tzmp storage-provisioner dashboard-metrics-scraper-6d58c4d9b5-hzcd6 kubernetes-dashboard-84b68f675b-t6nsd
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-211828 describe pod coredns-5644d7b6d9-hk88c metrics-server-7958775c-7tzmp storage-provisioner dashboard-metrics-scraper-6d58c4d9b5-hzcd6 kubernetes-dashboard-84b68f675b-t6nsd: exit status 1 (65.311919ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-hk88c" not found
	Error from server (NotFound): pods "metrics-server-7958775c-7tzmp" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6d58c4d9b5-hzcd6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-84b68f675b-t6nsd" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-211828 describe pod coredns-5644d7b6d9-hk88c metrics-server-7958775c-7tzmp storage-provisioner dashboard-metrics-scraper-6d58c4d9b5-hzcd6 kubernetes-dashboard-84b68f675b-t6nsd: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.44s)
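The kube-scheduler section earlier in these logs shows every informer list call rejected as forbidden for system:kube-scheduler. Such errors usually clear once the apiserver has finished loading its RBAC objects; one way to tell a transient startup race from a persistent RBAC problem is to re-check the grants after the apiserver answers. This is a sketch, reusing the old-k8s-version-211828 context from this run:

	# Should print "yes" once the default system:kube-scheduler ClusterRoleBinding is active
	kubectl --context old-k8s-version-211828 auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler
	# Inspect the binding the scheduler relies on
	kubectl --context old-k8s-version-211828 get clusterrolebinding system:kube-scheduler -o yaml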

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-bnlzk" [0e68b972-6990-4931-9ee0-0805daf80fbf] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
E0108 21:41:59.211229   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:42:39.301791   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
E0108 21:42:54.041175   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:42:57.125087   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 21:43:36.691662   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
E0108 21:45:15.378640   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 21:45:50.301861   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
E0108 21:45:56.111567   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
E0108 21:46:59.210514   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:47:19.156678   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
E0108 21:47:39.301234   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
E0108 21:47:54.041376   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:47:57.125576   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0108 21:50:15.378362   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0108 21:50:42.346496   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-211952 -n default-k8s-diff-port-211952
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-01-08 21:50:43.722841267 +0000 UTC m=+5005.866013559
start_stop_delete_test.go:274: (dbg) Run:  kubectl --context default-k8s-diff-port-211952 describe po kubernetes-dashboard-f87d45d87-bnlzk -n kubernetes-dashboard
start_stop_delete_test.go:274: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-211952 describe po kubernetes-dashboard-f87d45d87-bnlzk -n kubernetes-dashboard: context deadline exceeded (1.41µs)
start_stop_delete_test.go:274: kubectl --context default-k8s-diff-port-211952 describe po kubernetes-dashboard-f87d45d87-bnlzk -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:274: (dbg) Run:  kubectl --context default-k8s-diff-port-211952 logs kubernetes-dashboard-f87d45d87-bnlzk -n kubernetes-dashboard
start_stop_delete_test.go:274: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-211952 logs kubernetes-dashboard-f87d45d87-bnlzk -n kubernetes-dashboard: context deadline exceeded (168ns)
start_stop_delete_test.go:274: kubectl --context default-k8s-diff-port-211952 logs kubernetes-dashboard-f87d45d87-bnlzk -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
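The dashboard pod in this run never left Pending: the scheduler reported an untolerated node.kubernetes.io/not-ready taint on the single node (see the PodScheduled condition earlier in this section). When reproducing by hand, the node's taints and readiness can be read directly; this sketch assumes the minikube node carries the same name as the default-k8s-diff-port-211952 profile:

	# Show any taints still on the node
	kubectl --context default-k8s-diff-port-211952 get node default-k8s-diff-port-211952 -o jsonpath='{.spec.taints}'
	# Show taints and node conditions together to see why it is not Ready
	kubectl --context default-k8s-diff-port-211952 describe node default-k8s-diff-port-211952 | grep -A5 -E 'Taints|Conditions'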
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-211952
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-211952:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a",
	        "Created": "2023-01-08T21:20:01.150415833Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 282587,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:32:48.927228025Z",
	            "FinishedAt": "2023-01-08T21:32:47.253802017Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/hostname",
	        "HostsPath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/hosts",
	        "LogPath": "/var/lib/docker/containers/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a/553ec1d733bbeb6ffcc2ac0d1ced7151c61899c0985bf5342f52f9541b6c963a-json.log",
	        "Name": "/default-k8s-diff-port-211952",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-211952:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-211952",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4-init/diff:/var/lib/docker/overlay2/08b33854295261bb38a0074055b3dd7f2f489d35ea70ee5a594182ad1dc0fd2b/diff:/var/lib/docker/overlay2/5abe192e44c49d2a25560e4c1ae4a6ba5ec58100f8e1869d306a28b614caf414/diff:/var/lib/docker/overlay2/156a32df2dac676d7cd6558d5ad13be5054eb2fddc8d1b10137bf34bd6b1ddf7/diff:/var/lib/docker/overlay2/f3a5b8630133b5cdc48bbdb98f666efad221386dbd08f5fc096c7ba6d3f31667/diff:/var/lib/docker/overlay2/4d4ccd218e479874818175ec6c4e28f06f0084eed74b9a83af5a96e8bf41e6c7/diff:/var/lib/docker/overlay2/ab8a3247b29cb45a0024dbc2477a50b042f03e39064c09e23d21a2b5f331013c/diff:/var/lib/docker/overlay2/cd6161fa5d366910a9eb537894f374c49b0ba256c81a9c1ad3d4f4786d730668/diff:/var/lib/docker/overlay2/a26d88db409214dee405ed385f40b0d58e37ef670c26e65c1132eb27e3bb984d/diff:/var/lib/docker/overlay2/f76d833c20bf46fe7aec412f931039cdeb58b129552b97228c55fada8b05e63a/diff:/var/lib/docker/overlay2/584beb
80220462b8bc5a6c651735d55544ebef775c71b1a91b92235049a7e9da/diff:/var/lib/docker/overlay2/5209ebd508837f05291d5f22eff307db47f9c94691dd096cdc6ff6446db45461/diff:/var/lib/docker/overlay2/968fc302c0737742d4698bb0026b5cfc50c4b2934930db492c97b346fca2bd67/diff:/var/lib/docker/overlay2/b469d09ff82a75fb0f6dd04264a04b21d3d113c8577afc6dbc6a7dafe4022179/diff:/var/lib/docker/overlay2/2869116cf2f02bdc6170bde623a11022eba8e5c0f84630c9a0fd0a86483d0ed2/diff:/var/lib/docker/overlay2/ad4794ae637b0ce306963007b35562110fba050d9e3cd6196830448c749d7cc6/diff:/var/lib/docker/overlay2/a53b191897dfffb12ad2948cad16d668a0f7c2be5e3e6fd2556fe22d8d608879/diff:/var/lib/docker/overlay2/bbc7772821406640e29f442eb8752d92d7b72e8314035d6cf94255976b90642e/diff:/var/lib/docker/overlay2/5e3ac5600af83e3f6f756895fbd7a66fa97e799f659541508353078d057fd89b/diff:/var/lib/docker/overlay2/79c23e7084aa1de637cb09a7c07679d6c8d4ff6c6e7384f7f15cb549f8186fd6/diff:/var/lib/docker/overlay2/86ef1d4759376f1edb9cf43004ae59ba61200d79acea357b872de5247071531a/diff:/var/lib/d
ocker/overlay2/5c948a7514e051c2a3b4600944d475a674404c1f58b1bd7fd4d7b764be35c3e7/diff:/var/lib/docker/overlay2/71b25d81823337d776e0202f623b2f822a2ebba0e8f935e3211da145bbdac22d/diff:/var/lib/docker/overlay2/4261f26f8c837e7f8f07ba0cd2aca7274aa96561ac0bd104af28d44c15678f8d/diff:/var/lib/docker/overlay2/f2a71b21156c3ad367bb9fe8bde3e75df6681f9e505d34d8e1b635e50aa0e29b/diff:/var/lib/docker/overlay2/9ef1fde443b60bf4cda7ef2a51adcacef4383280f30eb6ee14e23959a5395e18/diff:/var/lib/docker/overlay2/aa727d5a432452b3e1ed2fc994db0bf2d5a47af89bc768a1e2b633baa8319025/diff:/var/lib/docker/overlay2/dc9499b738334b5626de1363086d853c4e6c2c5b2d51f58e8e1ed8346cc39f45/diff:/var/lib/docker/overlay2/03cc962675627b454b9857b5ffcac439143653d35cd42ad886571fbecd955c6c/diff:/var/lib/docker/overlay2/32064543ecb8bae609fda6b220233d7b73f4e601bd320e1d389451fb120c2459/diff:/var/lib/docker/overlay2/a0c00493cc3fdbdaa87f61faf83c8fcedb9ee4d5b24ecc35eceba0429b6dcc88/diff:/var/lib/docker/overlay2/18cbe77f8dab8144889e1deb46a8795fd207afbdfd9ea3fdeb526b72d7e
4a4cb/diff:/var/lib/docker/overlay2/dca4f5d0bca28bc75fa764afd39d07871f051c2d362fc2850ca957b975f75555/diff:/var/lib/docker/overlay2/cd071d8f661b9ff9c6d861fce06842188fda3d307062488b93944a97883f4a67/diff:/var/lib/docker/overlay2/5a0c166a1e11d0bb9f7de4a882be960f70ac972e91b3ab8fd83fc6eec2bf1d5e/diff:/var/lib/docker/overlay2/43eb3ae44d3bf3e6cbf5a66accacb1247ef210213c56a67097949d06794f4bbc/diff:/var/lib/docker/overlay2/b60cb5b525f4aed6b27cf2e451596e134d1185254f3798d5a91357814552e273/diff:/var/lib/docker/overlay2/e5c8c85e78fedb1981436f368ad70bdfe38d240dddceac3d103e1d09bc252cef/diff:/var/lib/docker/overlay2/7bf66ab7d17eda3c6760fe43588afa33bb56279f5a7202de9cc98b4711f0da5b/diff:/var/lib/docker/overlay2/d5dbd6e8a20b8681e7b55b57c382c8c9029da8496cc9e0f4d1eb3b224583535e/diff:/var/lib/docker/overlay2/de3ca9b514aa5071c0fd39d0d80fa080bce6e82e872747ec916ef7bf379c0c2a/diff:/var/lib/docker/overlay2/80bff03df7f2a6639d98a2c99d1cc55a36dc9110501452bfd7bce3bce496541e/diff:/var/lib/docker/overlay2/471bea5a4732f210c3fcf87a02643b615a96f6
39d01ac2c3e71f7886b23733a5/diff:/var/lib/docker/overlay2/29dd84326e3ef52c85008a60a7ff2225fd18f1d51ec3e60f143a0d93e442469d/diff:/var/lib/docker/overlay2/05d035eba3e40a396836d9917064869afd7ec70920f526f8903dc8f5a9f2dce3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47ba19c871602861b1c8ff1745322ef25dfc4f9aed7c8a83b8a68529ca18abc4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-211952",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-211952/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-211952",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-211952",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-211952",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "53e7e605d95635360fe097ddbfb4741ba8863864c9efdba4f96c7beabd6b2a3d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/53e7e605d956",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-211952": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "553ec1d733bb",
	                        "default-k8s-diff-port-211952"
	                    ],
	                    "NetworkID": "dac77270e17703c586bb819b54d2f7262cc084b9a2efd9432712b1970a60294f",
	                    "EndpointID": "c6be5b4f6a510a10d7efb0fabb1b87fa86a3d15a8ac3c847110291d9b95f085b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
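The docker inspect output above records the full container configuration; for this failure the fields the post-mortem actually keys on are the container state, the published API-server port (8444 here, mapped to host port 33054), and the cluster IP. Those can be pulled directly with docker's Go-template formatting, sketched here for the same container:

	# Running state and last start time
	docker inspect -f '{{.State.Status}} started={{.State.StartedAt}}' default-k8s-diff-port-211952
	# Host mapping for the non-default API-server port
	docker port default-k8s-diff-port-211952 8444/tcp
	# Container IP on the profile network
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' default-k8s-diff-port-211952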
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-211952 -n default-k8s-diff-port-211952
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-211952 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p embed-certs-211950                                      | embed-certs-211950           | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:26 UTC |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:26 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-212639                 | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-212639                      | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p newest-cni-212639 --memory=2200 --alsologtostderr       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-212639 sudo                                  | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:27 UTC | 08 Jan 23 21:27 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| delete  | -p newest-cni-212639                                       | newest-cni-212639            | jenkins | v1.28.0 | 08 Jan 23 21:28 UTC | 08 Jan 23 21:28 UTC |
	| addons  | enable metrics-server -p old-k8s-version-211828            | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-211828                                  | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-211828                 | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-211828                                  | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --kvm-network=default                                      |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                              |         |         |                     |                     |
	|         | --keep-context=false                                       |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-211859                 | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p no-preload-211859                                       | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-211859                      | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC | 08 Jan 23 21:31 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p no-preload-211859                                       | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:31 UTC |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr                                          |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC | 08 Jan 23 21:32 UTC |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC | 08 Jan 23 21:32 UTC |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-211952           | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC | 08 Jan 23 21:32 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-211952 | jenkins | v1.28.0 | 08 Jan 23 21:32 UTC |                     |
	|         | default-k8s-diff-port-211952                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-211859                                       | no-preload-211859            | jenkins | v1.28.0 | 08 Jan 23 21:49 UTC | 08 Jan 23 21:49 UTC |
	| delete  | -p old-k8s-version-211828                                  | old-k8s-version-211828       | jenkins | v1.28.0 | 08 Jan 23 21:50 UTC | 08 Jan 23 21:50 UTC |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 21:32:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:32:48.271671  282279 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:32:48.271850  282279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:32:48.271858  282279 out.go:309] Setting ErrFile to fd 2...
	I0108 21:32:48.271863  282279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:32:48.271968  282279 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:32:48.272502  282279 out.go:303] Setting JSON to false
	I0108 21:32:48.273983  282279 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4518,"bootTime":1673209051,"procs":571,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:32:48.274047  282279 start.go:135] virtualization: kvm guest
	I0108 21:32:48.276504  282279 out.go:177] * [default-k8s-diff-port-211952] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:32:48.277957  282279 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:32:48.277885  282279 notify.go:220] Checking for updates...
	I0108 21:32:48.279445  282279 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:32:48.280736  282279 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:32:48.281949  282279 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:32:48.283257  282279 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:32:48.285163  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:32:48.285682  282279 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:32:48.316260  282279 docker.go:137] docker version: linux-20.10.22
	I0108 21:32:48.316350  282279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:32:48.413793  282279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:32:48.33729701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:32:48.413905  282279 docker.go:254] overlay module found
	I0108 21:32:48.417336  282279 out.go:177] * Using the docker driver based on existing profile
	I0108 21:32:48.418815  282279 start.go:294] selected driver: docker
	I0108 21:32:48.418829  282279 start.go:838] validating driver "docker" against &{Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:32:48.419310  282279 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:32:48.420906  282279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:32:48.521697  282279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 21:32:48.442146841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:32:48.522015  282279 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:32:48.522046  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:32:48.522065  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:32:48.522085  282279 start_flags.go:317] config:
	{Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:32:48.525023  282279 out.go:177] * Starting control plane node default-k8s-diff-port-211952 in cluster default-k8s-diff-port-211952
	I0108 21:32:48.526212  282279 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 21:32:48.527567  282279 out.go:177] * Pulling base image ...
	I0108 21:32:48.528812  282279 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:32:48.528852  282279 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0108 21:32:48.528864  282279 cache.go:57] Caching tarball of preloaded images
	I0108 21:32:48.528902  282279 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 21:32:48.529139  282279 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:32:48.529153  282279 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I0108 21:32:48.529259  282279 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/config.json ...
	I0108 21:32:48.553994  282279 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 21:32:48.554019  282279 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 21:32:48.554037  282279 cache.go:193] Successfully downloaded all kic artifacts
	I0108 21:32:48.554075  282279 start.go:364] acquiring machines lock for default-k8s-diff-port-211952: {Name:mk8d09fc97f48331eb5f466fa120df2ec3fb1468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:32:48.554172  282279 start.go:368] acquired machines lock for "default-k8s-diff-port-211952" in 76.094µs
	I0108 21:32:48.554190  282279 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:32:48.554194  282279 fix.go:55] fixHost starting: 
	I0108 21:32:48.554387  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:32:48.579038  282279 fix.go:103] recreateIfNeeded on default-k8s-diff-port-211952: state=Stopped err=<nil>
	W0108 21:32:48.579064  282279 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 21:32:48.581203  282279 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-211952" ...
	I0108 21:32:45.206742  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:47.706026  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:47.985367  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:50.484419  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:48.582569  282279 cli_runner.go:164] Run: docker start default-k8s-diff-port-211952
	I0108 21:32:48.934338  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:32:48.961177  282279 kic.go:415] container "default-k8s-diff-port-211952" state is running.
	I0108 21:32:48.961578  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:48.987154  282279 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/config.json ...
	I0108 21:32:48.987361  282279 machine.go:88] provisioning docker machine ...
	I0108 21:32:48.987381  282279 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-211952"
	I0108 21:32:48.987415  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:49.012441  282279 main.go:134] libmachine: Using SSH client type: native
	I0108 21:32:49.012623  282279 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0108 21:32:49.012640  282279 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-211952 && echo "default-k8s-diff-port-211952" | sudo tee /etc/hostname
	I0108 21:32:49.013295  282279 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56504->127.0.0.1:33057: read: connection reset by peer
	I0108 21:32:52.144323  282279 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-211952
	
	I0108 21:32:52.144405  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.170929  282279 main.go:134] libmachine: Using SSH client type: native
	I0108 21:32:52.171092  282279 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0108 21:32:52.171123  282279 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-211952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-211952/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-211952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:32:52.287354  282279 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:32:52.287380  282279 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
	I0108 21:32:52.287397  282279 ubuntu.go:177] setting up certificates
	I0108 21:32:52.287404  282279 provision.go:83] configureAuth start
	I0108 21:32:52.287448  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:52.314640  282279 provision.go:138] copyHostCerts
	I0108 21:32:52.314692  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
	I0108 21:32:52.314701  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
	I0108 21:32:52.314776  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
	I0108 21:32:52.314872  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
	I0108 21:32:52.314881  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
	I0108 21:32:52.314915  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
	I0108 21:32:52.314981  282279 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
	I0108 21:32:52.314990  282279 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
	I0108 21:32:52.315028  282279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
	I0108 21:32:52.315090  282279 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-211952 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-211952]
	I0108 21:32:52.393623  282279 provision.go:172] copyRemoteCerts
	I0108 21:32:52.393682  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:32:52.393732  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.420616  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.506700  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:32:52.523990  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0108 21:32:52.541202  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:32:52.558612  282279 provision.go:86] duration metric: configureAuth took 271.196425ms
	I0108 21:32:52.558637  282279 ubuntu.go:193] setting minikube options for container-runtime
	I0108 21:32:52.558842  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:32:52.558859  282279 machine.go:91] provisioned docker machine in 3.571482619s
	I0108 21:32:52.558868  282279 start.go:300] post-start starting for "default-k8s-diff-port-211952" (driver="docker")
	I0108 21:32:52.558880  282279 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:32:52.558932  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:32:52.558975  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.584657  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.674855  282279 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:32:52.677553  282279 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 21:32:52.677581  282279 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 21:32:52.677595  282279 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 21:32:52.677605  282279 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 21:32:52.677620  282279 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
	I0108 21:32:52.677677  282279 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
	I0108 21:32:52.677760  282279 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
	I0108 21:32:52.677874  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:32:52.684482  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:32:52.701176  282279 start.go:303] post-start completed in 142.293081ms
	I0108 21:32:52.701237  282279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:32:52.701267  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.726596  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.807879  282279 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 21:32:52.811789  282279 fix.go:57] fixHost completed within 4.257589708s
	I0108 21:32:52.811814  282279 start.go:83] releasing machines lock for "default-k8s-diff-port-211952", held for 4.257630168s
	I0108 21:32:52.811884  282279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-211952
	I0108 21:32:52.836240  282279 ssh_runner.go:195] Run: cat /version.json
	I0108 21:32:52.836282  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.836337  282279 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:32:52.836380  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:32:52.860700  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.862030  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:32:52.970766  282279 ssh_runner.go:195] Run: systemctl --version
	I0108 21:32:52.974774  282279 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:32:52.987146  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:32:52.996877  282279 docker.go:189] disabling docker service ...
	I0108 21:32:52.996922  282279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:32:53.006589  282279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:32:53.015555  282279 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:32:53.091863  282279 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:32:53.169568  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:32:53.178903  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:32:53.192470  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0108 21:32:53.200832  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0108 21:32:53.209487  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0108 21:32:53.217000  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0108 21:32:53.224820  282279 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:32:53.231063  282279 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:32:53.237511  282279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:32:50.205796  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:52.206925  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:54.705913  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:52.485249  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:54.984287  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:56.984440  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:53.318100  282279 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:32:53.382213  282279 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 21:32:53.382279  282279 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 21:32:53.386027  282279 start.go:472] Will wait 60s for crictl version
	I0108 21:32:53.386088  282279 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:32:53.410740  282279 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-08T21:32:53Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0108 21:32:56.706559  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:59.206591  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:32:59.485251  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:01.985238  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:04.458457  282279 ssh_runner.go:195] Run: sudo crictl version
	I0108 21:33:04.481958  282279 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.10
	RuntimeApiVersion:  v1alpha2
	I0108 21:33:04.482015  282279 ssh_runner.go:195] Run: containerd --version
	I0108 21:33:04.505934  282279 ssh_runner.go:195] Run: containerd --version
	I0108 21:33:04.531417  282279 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.10 ...
	I0108 21:33:01.206633  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:03.705866  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:04.484384  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:06.484587  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:04.533192  282279 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-211952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 21:33:04.556070  282279 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0108 21:33:04.559379  282279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:33:04.568499  282279 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 21:33:04.568548  282279 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:33:04.591581  282279 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:33:04.591606  282279 containerd.go:467] Images already preloaded, skipping extraction
	I0108 21:33:04.591658  282279 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:33:04.614523  282279 containerd.go:553] all images are preloaded for containerd runtime.
	I0108 21:33:04.614545  282279 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:33:04.614587  282279 ssh_runner.go:195] Run: sudo crictl info
	I0108 21:33:04.638172  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:33:04.638197  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:33:04.638209  282279 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:33:04.638221  282279 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-211952 NodeName:default-k8s-diff-port-211952 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 21:33:04.638396  282279 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-diff-port-211952"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:33:04.638498  282279 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-diff-port-211952 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0108 21:33:04.638546  282279 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 21:33:04.645671  282279 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:33:04.645725  282279 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:33:04.652367  282279 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (521 bytes)
	I0108 21:33:04.664767  282279 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:33:04.676853  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes)
	I0108 21:33:04.689096  282279 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 21:33:04.691974  282279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:33:04.700883  282279 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952 for IP: 192.168.67.2
	I0108 21:33:04.700988  282279 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
	I0108 21:33:04.701028  282279 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
	I0108 21:33:04.701091  282279 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/client.key
	I0108 21:33:04.701143  282279 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key.c7fa3a9e
	I0108 21:33:04.701174  282279 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.key
	I0108 21:33:04.701257  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
	W0108 21:33:04.701282  282279 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
	I0108 21:33:04.701292  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:33:04.701314  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:33:04.701334  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:33:04.701353  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
	I0108 21:33:04.701392  282279 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
	I0108 21:33:04.701980  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:33:04.719063  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:33:04.735492  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:33:04.752219  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/default-k8s-diff-port-211952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:33:04.769562  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:33:04.785821  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:33:04.802771  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:33:04.820712  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:33:04.838855  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:33:04.855960  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
	I0108 21:33:04.872964  282279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
	I0108 21:33:04.890046  282279 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:33:04.902625  282279 ssh_runner.go:195] Run: openssl version
	I0108 21:33:04.907630  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:33:04.914856  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.917989  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.918039  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:33:04.922582  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:33:04.929304  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
	I0108 21:33:04.936712  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.939656  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:41 /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.939705  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
	I0108 21:33:04.944460  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
	I0108 21:33:04.951168  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
	I0108 21:33:04.958399  282279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.961446  282279 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:41 /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.961485  282279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
	I0108 21:33:04.966099  282279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:33:04.973053  282279 kubeadm.go:396] StartCluster: {Name:default-k8s-diff-port-211952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-211952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 21:33:04.973140  282279 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 21:33:04.973193  282279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:33:04.997395  282279 cri.go:87] found id: "852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	I0108 21:33:04.997418  282279 cri.go:87] found id: "7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc"
	I0108 21:33:04.997424  282279 cri.go:87] found id: "26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225"
	I0108 21:33:04.997430  282279 cri.go:87] found id: "581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d"
	I0108 21:33:04.997436  282279 cri.go:87] found id: "e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa"
	I0108 21:33:04.997442  282279 cri.go:87] found id: "b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d"
	I0108 21:33:04.997448  282279 cri.go:87] found id: ""
	I0108 21:33:04.997486  282279 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0108 21:33:05.008860  282279 cri.go:114] JSON = null
	W0108 21:33:05.008911  282279 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0108 21:33:05.008979  282279 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:33:05.015919  282279 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 21:33:05.015939  282279 kubeadm.go:627] restartCluster start
	I0108 21:33:05.015976  282279 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:33:05.022384  282279 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.023096  282279 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-211952" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:33:05.023497  282279 kubeconfig.go:146] "default-k8s-diff-port-211952" context is missing from /home/jenkins/minikube-integration/15565-3617/kubeconfig - will repair!
	I0108 21:33:05.024165  282279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:33:05.025421  282279 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:33:05.032110  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.032154  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.039769  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.240114  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.240215  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.248661  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.439925  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.440040  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.448824  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.640029  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.640100  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.648577  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:05.839823  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:05.839949  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:05.848450  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.040650  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.040716  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.049118  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.240431  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.240537  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.249216  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.440559  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.440631  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.449237  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.640348  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.640440  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.648807  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:06.840116  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:06.840207  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:06.848729  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.039918  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.039988  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.048542  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.240718  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.240800  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.249405  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.440610  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.440687  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.449502  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.640620  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.640687  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.649358  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:07.840624  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:07.840691  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:07.849725  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.039967  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:08.040051  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:08.048653  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.048676  282279 api_server.go:165] Checking apiserver status ...
	I0108 21:33:08.048717  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:33:08.056766  282279 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.056803  282279 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0108 21:33:08.056811  282279 kubeadm.go:1114] stopping kube-system containers ...
	I0108 21:33:08.056824  282279 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0108 21:33:08.056880  282279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:33:08.081283  282279 cri.go:87] found id: "852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f"
	I0108 21:33:08.081308  282279 cri.go:87] found id: "7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc"
	I0108 21:33:08.081315  282279 cri.go:87] found id: "26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225"
	I0108 21:33:08.081322  282279 cri.go:87] found id: "581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d"
	I0108 21:33:08.081330  282279 cri.go:87] found id: "e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa"
	I0108 21:33:08.081340  282279 cri.go:87] found id: "b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d"
	I0108 21:33:08.081349  282279 cri.go:87] found id: ""
	I0108 21:33:08.081357  282279 cri.go:232] Stopping containers: [852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f 7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc 26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225 581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d]
	I0108 21:33:08.081407  282279 ssh_runner.go:195] Run: which crictl
	I0108 21:33:08.084402  282279 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 852d56656c5856dc9466d5958d922b1fb183f2aec08614f985fe416c64a9e39f 7bd93fc5f6581fc645b55912b83f2574de351fc529386432414951aa2daf2fcc 26d1b1e130787a68406a03b8d80596c8831a47a2e1243e9600f2064c39821225 581d92e60716567c7004011a74cded892f1fc5d504fca63a302cf93f78c3ee0d e519152964881d01a5503fb8dde528337d33d35a331b776e6263034fcddf9faa b7739474207cefbf8cd49e1119f5845c03776a5769273cd775abfa0b72ed1f1d
	I0108 21:33:08.110089  282279 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:33:08.120362  282279 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:33:08.127839  282279 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan  8 21:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  8 21:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Jan  8 21:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  8 21:20 /etc/kubernetes/scheduler.conf
	
	I0108 21:33:08.127889  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0108 21:33:08.134530  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0108 21:33:08.141215  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0108 21:33:08.147849  282279 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.147901  282279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 21:33:08.154323  282279 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0108 21:33:08.161096  282279 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:33:08.161153  282279 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 21:33:08.167783  282279 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:33:08.174752  282279 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:33:08.174774  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.220042  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:05.706546  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:07.706879  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:08.484783  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:10.985364  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:08.629802  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.761310  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.827730  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:08.933064  282279 api_server.go:51] waiting for apiserver process to appear ...
	I0108 21:33:08.933117  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:09.442969  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:09.942976  282279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:10.014802  282279 api_server.go:71] duration metric: took 1.081741817s to wait for apiserver process to appear ...
	I0108 21:33:10.014831  282279 api_server.go:87] waiting for apiserver healthz status ...
	I0108 21:33:10.014843  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:10.205696  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:12.206601  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:14.706422  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:13.540654  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:33:13.540692  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:33:14.041349  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:14.045672  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:33:14.045695  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:33:14.540838  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:14.545990  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 21:33:14.546035  282279 api_server.go:102] status: https://192.168.67.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 21:33:15.041627  282279 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8444/healthz ...
	I0108 21:33:15.046572  282279 api_server.go:278] https://192.168.67.2:8444/healthz returned 200:
	ok
	I0108 21:33:15.052817  282279 api_server.go:140] control plane version: v1.25.3
	I0108 21:33:15.052839  282279 api_server.go:130] duration metric: took 5.038002036s to wait for apiserver health ...
	I0108 21:33:15.052848  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:33:15.052854  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:33:15.055132  282279 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:33:13.484537  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:15.484590  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:15.056590  282279 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:33:15.060305  282279 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:33:15.060320  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:33:15.073482  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:33:15.711930  282279 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:33:15.718666  282279 system_pods.go:59] 9 kube-system pods found
	I0108 21:33:15.718695  282279 system_pods.go:61] "coredns-565d847f94-fd94f" [08c29923-1e9a-4576-884b-e79485bdb24e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718706  282279 system_pods.go:61] "etcd-default-k8s-diff-port-211952" [4d6fe94c-75ef-40cf-b1c1-2377203f2503] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:33:15.718714  282279 system_pods.go:61] "kindnet-52cqk" [4ae6659c-e68a-492e-9e3f-5ffb047114c5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:33:15.718719  282279 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-211952" [e7f5a5bc-2f08-46ed-b8e1-1551fa29d27c] Running
	I0108 21:33:15.718728  282279 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-211952" [28c6bf68-0f27-494d-9102-fc669542c4a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:33:15.718735  282279 system_pods.go:61] "kube-proxy-hz8lw" [fa7c0714-1e45-4256-9383-976e79d1e49e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:33:15.718742  282279 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-211952" [645cd11b-9e55-47fe-aa43-f3b702c95c45] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:33:15.718751  282279 system_pods.go:61] "metrics-server-5c8fd5cf8-l2hp5" [bcd90320-490a-4343-abcb-f40aa375512e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718757  282279 system_pods.go:61] "storage-provisioner" [ad01ceaf-4269-4a54-b47e-b56d85e14354] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 21:33:15.718765  282279 system_pods.go:74] duration metric: took 6.815857ms to wait for pod list to return data ...
	I0108 21:33:15.718772  282279 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:33:15.721658  282279 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 21:33:15.721678  282279 node_conditions.go:123] node cpu capacity is 8
	I0108 21:33:15.721690  282279 node_conditions.go:105] duration metric: took 2.910879ms to run NodePressure ...
	I0108 21:33:15.721709  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:33:15.850359  282279 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 21:33:15.854037  282279 kubeadm.go:778] kubelet initialised
	I0108 21:33:15.854056  282279 kubeadm.go:779] duration metric: took 3.67496ms waiting for restarted kubelet to initialise ...
	I0108 21:33:15.854063  282279 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:33:15.859567  282279 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:17.864672  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:17.205815  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.206912  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:17.485768  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.985283  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:19.865551  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:22.365227  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:21.706078  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:23.706755  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:22.485377  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:24.984649  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:24.865051  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:27.364362  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:25.706795  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:28.206074  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:27.484652  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:29.484907  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:31.985181  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:29.365262  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:31.864536  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:30.206547  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:32.705805  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:34.484659  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:36.985157  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:33.865545  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:36.364706  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:35.205900  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:37.206575  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:39.706410  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:39.484405  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:41.485144  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:38.366314  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:40.865544  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:42.205820  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:44.206429  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:43.985033  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:45.985104  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:43.364368  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:45.365457  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:47.865583  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:46.706576  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:49.206474  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:47.985130  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:50.484792  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:50.365374  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:52.865225  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:51.206583  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:53.706500  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:52.984520  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:54.984810  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:55.364623  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:57.365130  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:56.205754  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:58.206523  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:57.484534  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:59.984319  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:01.985026  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:33:59.865408  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:02.364929  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:00.706734  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:03.206405  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:04.485051  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:06.984884  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:04.864561  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:07.366326  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:05.706010  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:07.706288  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:08.985455  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:11.485043  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:09.865391  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:12.364526  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:10.206460  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:12.705615  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:14.706005  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:13.984826  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:16.484152  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:14.364606  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:16.365289  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:17.206712  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:19.705849  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:18.485130  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:20.485537  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:18.864582  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:20.865195  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:22.865407  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:21.706525  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:24.206204  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:22.984564  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:24.984654  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:25.364979  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:27.365790  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:26.206664  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:28.705923  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:27.485200  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:29.984779  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:31.984961  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:29.865042  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:31.865310  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:30.705966  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:32.706184  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:34.706518  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:33.985148  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.484872  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:33.865432  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.365146  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:36.706768  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:39.205866  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:38.485130  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:40.984717  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:38.865173  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:41.364499  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:41.705813  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.706112  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.484553  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:45.984290  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:43.365079  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:45.365570  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:47.865054  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:46.206566  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:48.706606  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:47.984724  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:50.484463  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:50.365544  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:52.864342  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:51.206067  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:53.206386  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:52.484509  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:54.484628  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:56.984663  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:54.865174  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:56.865226  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:55.705777  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:58.206536  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:58.985043  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:00.985441  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:34:59.365717  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:01.865247  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:00.705686  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:02.706281  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:03.484874  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:05.485178  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:03.865438  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:06.365588  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:05.206221  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:07.206742  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:09.706286  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:07.485379  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:09.485491  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:11.985421  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:08.865293  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:11.364853  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:12.205938  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:14.206587  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:14.484834  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:16.984217  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:13.864458  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:15.865297  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:16.706511  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:19.206844  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:18.985241  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:21.485361  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:18.364605  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:20.365307  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:22.865280  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:21.706576  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:24.206264  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:23.984764  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:25.984921  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:25.365211  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:27.865212  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:26.706631  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:29.205837  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:28.485111  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:30.984944  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:29.865294  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:32.365083  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:31.206819  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:33.706459  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:33.485037  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:35.984758  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:34.864627  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:36.865632  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:36.206617  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:38.705904  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:37.984809  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:39.984942  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:41.985321  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:39.365282  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:41.365393  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:40.706491  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:43.206589  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:44.484609  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:46.985153  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:43.865525  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:46.364697  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:45.705645  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:47.705922  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:49.706709  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:49.484711  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:51.485242  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:48.365304  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:50.865062  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:52.206076  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:54.206636  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:53.984904  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:55.985190  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:53.364585  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:55.866756  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:56.706242  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.706485  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.484404  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.485044  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:35:58.365278  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.864694  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:02.865305  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:00.706662  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:03.206301  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:02.485191  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:04.984589  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:05.365592  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.865076  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:05.705915  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.706822  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:07.484499  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:09.985336  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:10.364594  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.365393  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:10.206345  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.206780  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:14.705921  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:12.485725  278286 pod_ready.go:102] pod "coredns-565d847f94-jw8vf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:38 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:14.982268  278286 pod_ready.go:81] duration metric: took 4m0.003125371s waiting for pod "coredns-565d847f94-jw8vf" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:14.982291  278286 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-jw8vf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:36:14.982340  278286 pod_ready.go:38] duration metric: took 4m0.007969001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:14.982370  278286 kubeadm.go:631] restartCluster took 4m10.8124082s
	W0108 21:36:14.982580  278286 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:36:14.982625  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:36:17.712121  278286 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.729470949s)
	I0108 21:36:17.712185  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:17.722197  278286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:17.729255  278286 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:36:17.729298  278286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:36:17.736461  278286 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:36:17.736503  278286 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:36:17.776074  278286 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:36:17.776141  278286 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:36:17.803264  278286 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:36:17.803362  278286 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:36:17.803405  278286 kubeadm.go:317] OS: Linux
	I0108 21:36:17.803445  278286 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:36:17.803517  278286 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:36:17.803559  278286 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:36:17.803599  278286 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:36:17.803644  278286 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:36:17.803713  278286 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:36:17.803782  278286 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:36:17.803823  278286 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:36:17.803861  278286 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:36:17.868509  278286 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:36:17.868640  278286 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:36:17.868786  278286 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:36:17.980682  278286 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:36:14.864781  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:16.865103  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:17.985661  278286 out.go:204]   - Generating certificates and keys ...
	I0108 21:36:17.985801  278286 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:36:17.985902  278286 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:36:17.986004  278286 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:36:17.986091  278286 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:36:17.986183  278286 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:36:17.986259  278286 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:36:17.986341  278286 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:36:17.986417  278286 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:36:17.986542  278286 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:36:17.986649  278286 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:36:17.986701  278286 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:36:17.986780  278286 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:36:18.059736  278286 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:36:18.157820  278286 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:36:18.409007  278286 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:36:18.508551  278286 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:36:18.520890  278286 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:36:18.521889  278286 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:36:18.521949  278286 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:36:18.609158  278286 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:36:16.706837  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:19.206362  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:18.611390  278286 out.go:204]   - Booting up control plane ...
	I0108 21:36:18.611574  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:36:18.612908  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:36:18.613799  278286 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:36:18.614568  278286 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:36:18.616788  278286 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:36:18.865230  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:20.865904  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:21.705735  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:23.706244  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:24.619697  278286 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002882 seconds
	I0108 21:36:24.619903  278286 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:36:24.627998  278286 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:36:25.143041  278286 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:36:25.143241  278286 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-211859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:36:25.650094  278286 kubeadm.go:317] [bootstrap-token] Using token: 0hs0sx.2quwwfjv2ljr7rle
	I0108 21:36:25.651809  278286 out.go:204]   - Configuring RBAC rules ...
	I0108 21:36:25.651961  278286 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:36:25.654307  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:36:25.658950  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:36:25.660952  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:36:25.662921  278286 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:36:25.664784  278286 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:36:25.671893  278286 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:36:25.864621  278286 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:36:26.057684  278286 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:36:26.058669  278286 kubeadm.go:317] 
	I0108 21:36:26.058754  278286 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:36:26.058765  278286 kubeadm.go:317] 
	I0108 21:36:26.058853  278286 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:36:26.058869  278286 kubeadm.go:317] 
	I0108 21:36:26.058904  278286 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:36:26.058983  278286 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:36:26.059054  278286 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:36:26.059063  278286 kubeadm.go:317] 
	I0108 21:36:26.059140  278286 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 21:36:26.059150  278286 kubeadm.go:317] 
	I0108 21:36:26.059219  278286 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:36:26.059229  278286 kubeadm.go:317] 
	I0108 21:36:26.059298  278286 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:36:26.059393  278286 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:36:26.059498  278286 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:36:26.059510  278286 kubeadm.go:317] 
	I0108 21:36:26.059614  278286 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:36:26.059726  278286 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:36:26.059744  278286 kubeadm.go:317] 
	I0108 21:36:26.059848  278286 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 0hs0sx.2quwwfjv2ljr7rle \
	I0108 21:36:26.059981  278286 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:36:26.060005  278286 kubeadm.go:317] 	--control-plane 
	I0108 21:36:26.060009  278286 kubeadm.go:317] 
	I0108 21:36:26.060140  278286 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:36:26.060156  278286 kubeadm.go:317] 
	I0108 21:36:26.060242  278286 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 0hs0sx.2quwwfjv2ljr7rle \
	I0108 21:36:26.060344  278286 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:36:26.061999  278286 kubeadm.go:317] W0108 21:36:17.771186    3316 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:36:26.062209  278286 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:36:26.062331  278286 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:36:26.062355  278286 cni.go:95] Creating CNI manager for ""
	I0108 21:36:26.062365  278286 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:36:26.064570  278286 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:36:26.066293  278286 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:36:26.112674  278286 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:36:26.112695  278286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:36:26.128247  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:36:26.801006  278286 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:36:26.801092  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:26.801100  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=no-preload-211859 minikube.k8s.io/updated_at=2023_01_08T21_36_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:26.808849  278286 ops.go:34] apiserver oom_adj: -16
	I0108 21:36:26.928188  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:23.365451  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:25.365511  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:27.864750  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:25.706512  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:28.206205  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:27.522837  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:28.022542  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:28.522922  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.022368  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.522328  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:30.022929  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:30.523064  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:31.022221  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:31.522993  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:32.022733  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:29.865401  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:31.865613  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:30.207607  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:32.705941  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:34.706614  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:32.522593  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:33.022409  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:33.522830  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.022514  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.522961  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:35.023204  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:35.523260  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:36.022528  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:36.522928  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:37.022841  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:34.364509  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:36.364566  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:37.523049  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.022536  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.522834  278286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:38.586979  278286 kubeadm.go:1067] duration metric: took 11.78594385s to wait for elevateKubeSystemPrivileges.
	I0108 21:36:38.587009  278286 kubeadm.go:398] StartCluster complete in 4m34.458658123s
	I0108 21:36:38.587037  278286 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:38.587148  278286 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:36:38.588149  278286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:39.105452  278286 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-211859" rescaled to 1
	I0108 21:36:39.105521  278286 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:36:39.107702  278286 out.go:177] * Verifying Kubernetes components...
	I0108 21:36:39.105557  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:36:39.105612  278286 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:36:39.105739  278286 config.go:180] Loaded profile config "no-preload-211859": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:36:39.109968  278286 addons.go:65] Setting storage-provisioner=true in profile "no-preload-211859"
	I0108 21:36:39.109979  278286 addons.go:65] Setting default-storageclass=true in profile "no-preload-211859"
	I0108 21:36:39.109999  278286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:39.110001  278286 addons.go:227] Setting addon storage-provisioner=true in "no-preload-211859"
	I0108 21:36:39.110004  278286 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-211859"
	W0108 21:36:39.110010  278286 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:36:39.110055  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.109970  278286 addons.go:65] Setting dashboard=true in profile "no-preload-211859"
	I0108 21:36:39.110159  278286 addons.go:227] Setting addon dashboard=true in "no-preload-211859"
	W0108 21:36:39.110169  278286 addons.go:236] addon dashboard should already be in state true
	I0108 21:36:39.110200  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.109981  278286 addons.go:65] Setting metrics-server=true in profile "no-preload-211859"
	I0108 21:36:39.110261  278286 addons.go:227] Setting addon metrics-server=true in "no-preload-211859"
	W0108 21:36:39.110276  278286 addons.go:236] addon metrics-server should already be in state true
	I0108 21:36:39.110330  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.110352  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110511  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110572  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.110706  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.151624  278286 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:36:39.153337  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:36:39.153355  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:36:39.153407  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.155756  278286 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:36:39.157349  278286 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:36:39.157371  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:36:39.157418  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.160291  278286 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:36:39.157827  278286 addons.go:227] Setting addon default-storageclass=true in "no-preload-211859"
	W0108 21:36:39.162099  278286 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:36:39.162135  278286 host.go:66] Checking if "no-preload-211859" exists ...
	I0108 21:36:39.162607  278286 cli_runner.go:164] Run: docker container inspect no-preload-211859 --format={{.State.Status}}
	I0108 21:36:39.164649  278286 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:36:37.206095  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:39.206996  274657 pod_ready.go:102] pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:19:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:39.166241  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:36:39.166260  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:36:39.166314  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.193544  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.199785  278286 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:36:39.199812  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:36:39.199862  278286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-211859
	I0108 21:36:39.205498  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.208611  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.231311  278286 node_ready.go:35] waiting up to 6m0s for node "no-preload-211859" to be "Ready" ...
	I0108 21:36:39.231694  278286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:36:39.240040  278286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/no-preload-211859/id_rsa Username:docker}
	I0108 21:36:39.426253  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:36:39.426846  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:36:39.426865  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:36:39.436437  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:36:39.438425  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:36:39.438452  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:36:39.523837  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:36:39.523905  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:36:39.532411  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:36:39.532499  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:36:39.615631  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:36:39.615719  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:36:39.626445  278286 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:36:39.626521  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:36:39.639382  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:36:39.639451  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:36:39.725135  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:36:39.731545  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:36:39.731573  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:36:39.827181  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:36:39.827289  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:36:39.917954  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:36:39.917981  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:36:40.011154  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:36:40.011186  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:36:40.017536  278286 start.go:826] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
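	(The "host record injected into CoreDNS" line is the tail end of the sed pipeline run a few lines earlier: the coredns ConfigMap is fetched, a `hosts` block mapping host.minikube.internal to the gateway address 192.168.85.1 is spliced in front of the `forward . /etc/resolv.conf` directive, and the result is pushed back with `kubectl replace`. Below is a hedged Go sketch of the same edit done with client-go string manipulation instead of sed; the address, namespace, and ConfigMap name are taken from the log, the rest, including the exact indentation of the forward directive, is an assumption.)

	package main

	import (
		"context"
		"fmt"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path as seen in the log
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		cm, err := client.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		hosts := "        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }\n"
		// Insert the hosts block immediately before the forward directive,
		// which is what the sed expression in the log does textually.
		corefile := cm.Data["Corefile"]
		if !strings.Contains(corefile, "host.minikube.internal") {
			cm.Data["Corefile"] = strings.Replace(corefile,
				"        forward . /etc/resolv.conf",
				hosts+"        forward . /etc/resolv.conf", 1)
			if _, err := client.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
				panic(err)
			}
		}
		fmt.Println("host.minikube.internal record ensured in CoreDNS")
	}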
	I0108 21:36:40.033803  278286 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:36:40.033827  278286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:36:40.117534  278286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:36:40.522822  278286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.096529518s)
	I0108 21:36:40.522881  278286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.086407927s)
	I0108 21:36:40.714945  278286 addons.go:457] Verifying addon metrics-server=true in "no-preload-211859"
	I0108 21:36:41.016673  278286 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-211859 addons enable metrics-server	
	
	
	I0108 21:36:41.018352  278286 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0108 21:36:41.019949  278286 addons.go:488] enableAddons completed in 1.914342148s
	I0108 21:36:41.239026  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:41.203867  274657 pod_ready.go:81] duration metric: took 4m0.002306196s waiting for pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:41.203901  274657 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-5644d7b6d9-lm49s" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:36:41.203940  274657 pod_ready.go:38] duration metric: took 4m0.006906053s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:41.203967  274657 kubeadm.go:631] restartCluster took 5m9.671476322s
	W0108 21:36:41.204176  274657 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:36:41.204211  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:36:42.410951  274657 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.206714622s)
	I0108 21:36:42.411034  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:42.420761  274657 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:42.427895  274657 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:36:42.427942  274657 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:36:42.434476  274657 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:36:42.434514  274657 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:36:42.479014  274657 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0108 21:36:42.479075  274657 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:36:42.506527  274657 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:36:42.506650  274657 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:36:42.506722  274657 kubeadm.go:317] OS: Linux
	I0108 21:36:42.506775  274657 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:36:42.506836  274657 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:36:42.506895  274657 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:36:42.506970  274657 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:36:42.507042  274657 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:36:42.507115  274657 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:36:42.575244  274657 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:36:42.575356  274657 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:36:42.575464  274657 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:36:42.705716  274657 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:36:42.707322  274657 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:36:42.714364  274657 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0108 21:36:42.788896  274657 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:36:38.365195  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:40.864900  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:42.865124  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:42.793301  274657 out.go:204]   - Generating certificates and keys ...
	I0108 21:36:42.793445  274657 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:36:42.793584  274657 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:36:42.793709  274657 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:36:42.793804  274657 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:36:42.793866  274657 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:36:42.793909  274657 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:36:42.793956  274657 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:36:42.794003  274657 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:36:42.794059  274657 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:36:42.794113  274657 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:36:42.794145  274657 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:36:42.794211  274657 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:36:42.938030  274657 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:36:43.019391  274657 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:36:43.165446  274657 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:36:43.296073  274657 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:36:43.296890  274657 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:36:43.298841  274657 out.go:204]   - Booting up control plane ...
	I0108 21:36:43.298961  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:36:43.303628  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:36:43.304561  274657 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:36:43.305309  274657 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:36:43.307378  274657 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:36:43.239329  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:45.239687  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:45.365383  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:47.865553  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:47.739338  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:49.739648  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:52.238824  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
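	(The repeated node_ready.go:58 lines above are a poll loop: the node object is re-fetched every couple of seconds for up to 6m0s and its Ready condition is checked each time. A minimal client-go sketch of that kind of wait, assuming the standard wait.PollImmediate helper; function and variable names are illustrative, and the kubeconfig path is the on-node path quoted in the log.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady re-fetches the node on an interval and returns once its
	// Ready condition is True, or errs out when the timeout expires.
	func waitNodeReady(client *kubernetes.Clientset, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(client, "no-preload-211859", 6*time.Minute); err != nil {
			fmt.Println("node never reported Ready:", err)
		}
	}

	(The pod_ready.go:102 lines interleaved in this section are the analogous loop over pod conditions, with the same 4m0s/6m0s style timeouts.)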
	I0108 21:36:51.810038  274657 kubeadm.go:317] [apiclient] All control plane components are healthy after 8.502593 seconds
	I0108 21:36:51.810181  274657 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:36:51.821149  274657 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:36:52.336468  274657 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:36:52.336653  274657 kubeadm.go:317] [mark-control-plane] Marking the node old-k8s-version-211828 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0108 21:36:52.842409  274657 kubeadm.go:317] [bootstrap-token] Using token: ayw1nu.phe95ebgibs3udtw
	I0108 21:36:52.844083  274657 out.go:204]   - Configuring RBAC rules ...
	I0108 21:36:52.844190  274657 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:36:52.847569  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:36:52.850422  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:36:52.852561  274657 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:36:52.854272  274657 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:36:52.894172  274657 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:36:53.257840  274657 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:36:53.258782  274657 kubeadm.go:317] 
	I0108 21:36:53.258856  274657 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:36:53.258871  274657 kubeadm.go:317] 
	I0108 21:36:53.258948  274657 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:36:53.258958  274657 kubeadm.go:317] 
	I0108 21:36:53.258988  274657 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:36:53.259068  274657 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:36:53.259119  274657 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:36:53.259126  274657 kubeadm.go:317] 
	I0108 21:36:53.259165  274657 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:36:53.259250  274657 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:36:53.259306  274657 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:36:53.259310  274657 kubeadm.go:317] 
	I0108 21:36:53.259383  274657 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities 
	I0108 21:36:53.259441  274657 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:36:53.259446  274657 kubeadm.go:317] 
	I0108 21:36:53.259539  274657 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token ayw1nu.phe95ebgibs3udtw \
	I0108 21:36:53.259662  274657 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:36:53.259688  274657 kubeadm.go:317]     --control-plane 	  
	I0108 21:36:53.259694  274657 kubeadm.go:317] 
	I0108 21:36:53.259813  274657 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:36:53.259829  274657 kubeadm.go:317] 
	I0108 21:36:53.259906  274657 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token ayw1nu.phe95ebgibs3udtw \
	I0108 21:36:53.260017  274657 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:36:53.262215  274657 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:36:53.262352  274657 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
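	(Recap of the fallback sequence that produced the block above: after the restart wait timed out, the log shows `kubeadm reset --force` followed by `kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=...`, both executed on the node through ssh_runner. A rough Go sketch of that sequence as two local exec calls; the binary path and the preflight-error list are copied from the log, everything else, including running locally instead of over SSH, is an assumption.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// resetAndInit mimics the fallback in the log: wipe the previous control
	// plane state, then re-run kubeadm init from the generated config.
	// Sketch only; minikube runs these through ssh_runner on the node.
	func resetAndInit(binDir string) error {
		env := "PATH=" + binDir + ":$PATH"
		reset := fmt.Sprintf("sudo env %s kubeadm reset --cri-socket /run/containerd/containerd.sock --force", env)
		ignore := "DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd," +
			"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml," +
			"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml," +
			"Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
		initCmd := fmt.Sprintf("sudo env %s kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s", env, ignore)
		for _, cmd := range []string{reset, initCmd} {
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				return fmt.Errorf("%s: %w\n%s", cmd, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := resetAndInit("/var/lib/minikube/binaries/v1.16.0"); err != nil {
			fmt.Println(err)
		}
	}

	(The second cluster in this section, default-k8s-diff-port-211952, goes through the same reset/init fallback later in the log with the v1.25.3 binaries and an additional Mem preflight exclusion.)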
	I0108 21:36:53.262389  274657 cni.go:95] Creating CNI manager for ""
	I0108 21:36:53.262399  274657 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:36:53.264329  274657 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:36:50.364823  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:52.865232  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:53.265737  274657 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:36:53.269178  274657 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0108 21:36:53.269195  274657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:36:53.282457  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:36:53.488747  274657 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:36:53.488820  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:53.488836  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=old-k8s-version-211828 minikube.k8s.io/updated_at=2023_01_08T21_36_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:53.570539  274657 ops.go:34] apiserver oom_adj: -16
	I0108 21:36:53.570672  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:54.167787  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:54.667921  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:54.239313  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:56.739563  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:55.364998  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:57.365375  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:36:55.167437  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:55.667880  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:56.167390  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:56.667596  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:57.167755  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:57.667185  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:58.167862  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:58.667300  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:59.167329  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:59.667869  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:36:59.239207  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:01.738681  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:36:59.865037  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:02.364695  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:00.167819  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:00.668207  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:01.167287  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:01.668111  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:02.167785  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:02.667989  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:03.167539  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:03.667603  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:04.167676  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:04.667808  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:03.739097  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:05.739401  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:04.864908  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:07.365162  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:05.168182  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:05.667597  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:06.167537  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:06.667619  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:07.168108  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:07.668145  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:08.167448  274657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:08.262221  274657 kubeadm.go:1067] duration metric: took 14.773463011s to wait for elevateKubeSystemPrivileges.
	I0108 21:37:08.262258  274657 kubeadm.go:398] StartCluster complete in 5m36.772809994s
	I0108 21:37:08.262281  274657 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:08.262401  274657 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:37:08.263456  274657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:08.779968  274657 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-211828" rescaled to 1
	I0108 21:37:08.780035  274657 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:37:08.781734  274657 out.go:177] * Verifying Kubernetes components...
	I0108 21:37:08.780090  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:37:08.780101  274657 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:37:08.780321  274657 config.go:180] Loaded profile config "old-k8s-version-211828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0108 21:37:08.783353  274657 addons.go:65] Setting dashboard=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783365  274657 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783367  274657 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783380  274657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:08.783385  274657 addons.go:227] Setting addon metrics-server=true in "old-k8s-version-211828"
	I0108 21:37:08.783387  274657 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-211828"
	W0108 21:37:08.783394  274657 addons.go:236] addon metrics-server should already be in state true
	I0108 21:37:08.783441  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783384  274657 addons.go:227] Setting addon dashboard=true in "old-k8s-version-211828"
	W0108 21:37:08.783526  274657 addons.go:236] addon dashboard should already be in state true
	I0108 21:37:08.783568  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783356  274657 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-211828"
	I0108 21:37:08.783648  274657 addons.go:227] Setting addon storage-provisioner=true in "old-k8s-version-211828"
	W0108 21:37:08.783668  274657 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:37:08.783727  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.783776  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.783927  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.784028  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.784133  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.794999  274657 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-211828" to be "Ready" ...
	I0108 21:37:08.824991  274657 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:37:08.822967  274657 addons.go:227] Setting addon default-storageclass=true in "old-k8s-version-211828"
	W0108 21:37:08.825030  274657 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:37:08.825068  274657 host.go:66] Checking if "old-k8s-version-211828" exists ...
	I0108 21:37:08.826962  274657 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:37:08.825542  274657 cli_runner.go:164] Run: docker container inspect old-k8s-version-211828 --format={{.State.Status}}
	I0108 21:37:08.828596  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:37:08.828602  274657 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:37:08.828610  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:37:08.828632  274657 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:08.830193  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:37:08.831697  274657 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:37:08.830251  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.828662  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.833415  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:37:08.833435  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:37:08.833477  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.865130  274657 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:08.865153  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:37:08.865262  274657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-211828
	I0108 21:37:08.870167  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.876829  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.891352  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.895346  274657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:37:08.901551  274657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/old-k8s-version-211828/id_rsa Username:docker}
	I0108 21:37:08.966952  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:37:08.966980  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:37:09.020839  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:37:09.020864  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:37:09.026679  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:37:09.026702  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:37:09.035881  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:09.036053  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:09.037460  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:37:09.037484  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:37:09.113665  274657 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:09.113699  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:37:09.126531  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:37:09.126566  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:37:09.132355  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:09.142671  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:37:09.142695  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:37:09.225954  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:37:09.225983  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:37:09.311794  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:37:09.311868  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:37:09.321460  274657 start.go:826] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0108 21:37:09.329750  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:37:09.329779  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:37:09.415014  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:37:09.415041  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:37:09.434577  274657 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:09.434608  274657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:37:09.450703  274657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:09.848961  274657 addons.go:457] Verifying addon metrics-server=true in "old-k8s-version-211828"
	I0108 21:37:10.258944  274657 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-211828 addons enable metrics-server	
	
	
	I0108 21:37:10.260902  274657 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0108 21:37:07.739683  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:09.740319  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:12.239302  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:09.365405  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:11.865521  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:10.262484  274657 addons.go:488] enableAddons completed in 1.482385235s
	I0108 21:37:10.800978  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:13.301617  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:14.239339  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:16.239538  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:14.364973  282279 pod_ready.go:102] pod "coredns-565d847f94-fd94f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-08 21:20:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0108 21:37:15.862343  282279 pod_ready.go:81] duration metric: took 4m0.002735215s waiting for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" ...
	E0108 21:37:15.862365  282279 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-fd94f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:37:15.862410  282279 pod_ready.go:38] duration metric: took 4m0.008337756s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:37:15.862442  282279 kubeadm.go:631] restartCluster took 4m10.846498869s
	W0108 21:37:15.862572  282279 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:37:15.862600  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0108 21:37:18.604264  282279 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.741643542s)
	I0108 21:37:18.604323  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:18.613785  282279 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:37:18.620707  282279 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 21:37:18.620756  282279 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:37:18.627110  282279 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:37:18.627161  282279 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 21:37:18.665230  282279 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 21:37:18.665379  282279 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 21:37:18.693390  282279 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0108 21:37:18.693485  282279 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
	I0108 21:37:18.693536  282279 kubeadm.go:317] OS: Linux
	I0108 21:37:18.693625  282279 kubeadm.go:317] CGROUPS_CPU: enabled
	I0108 21:37:18.693699  282279 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0108 21:37:18.693758  282279 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0108 21:37:18.693816  282279 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0108 21:37:18.693855  282279 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0108 21:37:18.693897  282279 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0108 21:37:18.693932  282279 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0108 21:37:18.693986  282279 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0108 21:37:18.694033  282279 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0108 21:37:18.757764  282279 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:37:18.757887  282279 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:37:18.757990  282279 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:37:18.880203  282279 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:37:18.885649  282279 out.go:204]   - Generating certificates and keys ...
	I0108 21:37:18.885786  282279 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 21:37:18.885859  282279 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 21:37:18.885942  282279 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:37:18.886014  282279 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:37:18.886108  282279 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:37:18.886194  282279 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 21:37:18.886282  282279 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:37:18.886366  282279 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:37:18.886464  282279 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:37:18.886537  282279 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:37:18.886603  282279 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 21:37:18.886705  282279 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:37:18.970116  282279 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:37:19.061650  282279 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:37:19.314844  282279 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:37:19.411377  282279 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:37:19.423013  282279 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:37:19.423842  282279 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:37:19.423907  282279 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 21:37:19.507274  282279 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:37:15.801234  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:18.301292  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:18.738947  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:20.739953  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:19.509473  282279 out.go:204]   - Booting up control plane ...
	I0108 21:37:19.509609  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:37:19.510392  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:37:19.511285  282279 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:37:19.512005  282279 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:37:19.514544  282279 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:37:20.301380  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:22.801865  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:25.517443  282279 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002884 seconds
	I0108 21:37:25.517596  282279 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:37:25.525842  282279 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:37:26.040802  282279 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:37:26.041035  282279 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-diff-port-211952 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:37:26.548645  282279 kubeadm.go:317] [bootstrap-token] Using token: e8jg3u.r5d9gog7fpwiofqp
	I0108 21:37:26.550383  282279 out.go:204]   - Configuring RBAC rules ...
	I0108 21:37:26.550517  282279 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:37:26.553632  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:37:26.561595  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:37:26.563603  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:37:26.566273  282279 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:37:26.569011  282279 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:37:26.577117  282279 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:37:26.777486  282279 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 21:37:26.956684  282279 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 21:37:26.957742  282279 kubeadm.go:317] 
	I0108 21:37:26.957841  282279 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 21:37:26.957852  282279 kubeadm.go:317] 
	I0108 21:37:26.957946  282279 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 21:37:26.957959  282279 kubeadm.go:317] 
	I0108 21:37:26.957992  282279 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 21:37:26.958072  282279 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:37:26.958151  282279 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:37:26.958161  282279 kubeadm.go:317] 
	I0108 21:37:26.958244  282279 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 21:37:26.958255  282279 kubeadm.go:317] 
	I0108 21:37:26.958324  282279 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:37:26.958334  282279 kubeadm.go:317] 
	I0108 21:37:26.958411  282279 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 21:37:26.958519  282279 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:37:26.958614  282279 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:37:26.958627  282279 kubeadm.go:317] 
	I0108 21:37:26.958736  282279 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:37:26.958873  282279 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 21:37:26.958895  282279 kubeadm.go:317] 
	I0108 21:37:26.958993  282279 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token e8jg3u.r5d9gog7fpwiofqp \
	I0108 21:37:26.959108  282279 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 \
	I0108 21:37:26.959144  282279 kubeadm.go:317] 	--control-plane 
	I0108 21:37:26.959155  282279 kubeadm.go:317] 
	I0108 21:37:26.959279  282279 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:37:26.959295  282279 kubeadm.go:317] 
	I0108 21:37:26.959387  282279 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token e8jg3u.r5d9gog7fpwiofqp \
	I0108 21:37:26.959591  282279 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:f049ca7232078b3b5811fda028669b6aa473e8a1e17a076548c5e610af25ced8 
	I0108 21:37:27.010668  282279 kubeadm.go:317] W0108 21:37:18.659761    3310 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0108 21:37:27.010963  282279 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
	I0108 21:37:27.011109  282279 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
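The kubeadm init output above ends with the bootstrap token and join command for this profile. As a minimal sketch (illustrative only, not part of the test run), the token can be listed, or a fresh join command printed, from inside the node via minikube ssh:

	minikube -p default-k8s-diff-port-211952 ssh -- sudo kubeadm token list
	minikube -p default-k8s-diff-port-211952 ssh -- sudo kubeadm token create --print-join-command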
	I0108 21:37:27.011143  282279 cni.go:95] Creating CNI manager for ""
	I0108 21:37:27.011161  282279 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 21:37:27.013790  282279 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:37:23.239090  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:25.239428  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:27.016436  282279 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:37:27.020247  282279 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 21:37:27.020267  282279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 21:37:27.033939  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
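minikube applies the kindnet CNI manifest with the bundled kubectl binary shown above. A hedged way to confirm the CNI pods actually come up (illustrative commands, not taken from this run):

	kubectl -n kube-system get daemonsets
	kubectl -n kube-system get pods -o wide | grep kindnet   # CNI pods must be Running before the node can report Ready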
	I0108 21:37:27.773746  282279 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:37:27.773820  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:27.773829  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=default-k8s-diff-port-211952 minikube.k8s.io/updated_at=2023_01_08T21_37_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:27.858069  282279 ops.go:34] apiserver oom_adj: -16
	I0108 21:37:27.858162  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:25.301674  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:27.801420  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:27.738878  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:29.739083  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:31.739252  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:28.451616  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:28.951553  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:29.451725  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:29.950766  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.450878  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.951743  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:31.450739  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:31.951303  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:32.450882  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:32.951389  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:30.301599  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:32.800759  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:33.739342  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:36.238973  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:33.451553  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:33.951640  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:34.451179  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:34.951522  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.450753  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.950904  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:36.450992  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:36.951610  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:37.451311  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:37.951081  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:35.301523  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:37.800886  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:38.451124  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:38.951311  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:39.451052  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:39.951786  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:40.450906  282279 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:37:40.622559  282279 kubeadm.go:1067] duration metric: took 12.848793735s to wait for elevateKubeSystemPrivileges.
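The repeated "kubectl get sa default" calls above are minikube polling for the "default" ServiceAccount; its appearance signals that the controller-manager has finished initializing the namespace, after which the kube-system privileges can be granted. The equivalent one-off check, as a sketch using the same bundled kubectl:

	sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig -o name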
	I0108 21:37:40.622595  282279 kubeadm.go:398] StartCluster complete in 4m35.649555324s
	I0108 21:37:40.622614  282279 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:40.622704  282279 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:37:40.623799  282279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:37:41.138673  282279 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-diff-port-211952" rescaled to 1
	I0108 21:37:41.138736  282279 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 21:37:41.138753  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:37:41.141673  282279 out.go:177] * Verifying Kubernetes components...
	I0108 21:37:41.138793  282279 addons.go:486] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0108 21:37:41.138974  282279 config.go:180] Loaded profile config "default-k8s-diff-port-211952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:37:41.143598  282279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:41.143622  282279 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143643  282279 addons.go:227] Setting addon storage-provisioner=true in "default-k8s-diff-port-211952"
	W0108 21:37:41.143652  282279 addons.go:236] addon storage-provisioner should already be in state true
	I0108 21:37:41.143672  282279 addons.go:65] Setting default-storageclass=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143694  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.143696  282279 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-211952"
	I0108 21:37:41.143742  282279 addons.go:65] Setting metrics-server=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143751  282279 addons.go:65] Setting dashboard=true in profile "default-k8s-diff-port-211952"
	I0108 21:37:41.143771  282279 addons.go:227] Setting addon metrics-server=true in "default-k8s-diff-port-211952"
	I0108 21:37:41.143780  282279 addons.go:227] Setting addon dashboard=true in "default-k8s-diff-port-211952"
	W0108 21:37:41.143797  282279 addons.go:236] addon dashboard should already be in state true
	I0108 21:37:41.143841  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	W0108 21:37:41.143781  282279 addons.go:236] addon metrics-server should already be in state true
	I0108 21:37:41.143915  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.144018  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144222  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144229  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.144299  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.184041  282279 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 21:37:41.186236  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:37:41.186259  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:37:41.183770  282279 addons.go:227] Setting addon default-storageclass=true in "default-k8s-diff-port-211952"
	I0108 21:37:41.186311  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	W0108 21:37:41.186320  282279 addons.go:236] addon default-storageclass should already be in state true
	I0108 21:37:41.186356  282279 host.go:66] Checking if "default-k8s-diff-port-211952" exists ...
	I0108 21:37:41.187948  282279 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:37:41.186812  282279 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-211952 --format={{.State.Status}}
	I0108 21:37:41.191003  282279 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 21:37:41.189639  282279 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:41.192705  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:37:41.192773  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.195052  282279 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 21:37:38.239104  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:40.239437  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:41.196683  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 21:37:41.196706  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 21:37:41.196763  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.221516  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.226288  282279 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:41.226312  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:37:41.226392  282279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-211952
	I0108 21:37:41.226595  282279 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-211952" to be "Ready" ...
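The node_ready lines that follow are minikube polling the node's Ready condition. A minimal manual equivalent, using the kubeconfig written by this run:

	kubectl --kubeconfig /home/jenkins/minikube-integration/15565-3617/kubeconfig \
	  get node default-k8s-diff-port-211952 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'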
	I0108 21:37:41.226958  282279 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:37:41.233899  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.236188  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.261350  282279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/default-k8s-diff-port-211952/id_rsa Username:docker}
	I0108 21:37:41.328029  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:37:41.328055  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 21:37:41.410390  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:37:41.410477  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:37:41.429903  282279 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:41.429978  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:37:41.431528  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:37:41.434596  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:37:41.435835  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 21:37:41.435891  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 21:37:41.518039  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:37:41.525611  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 21:37:41.525635  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 21:37:41.617739  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 21:37:41.617770  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 21:37:41.710400  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 21:37:41.710430  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 21:37:41.733619  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 21:37:41.733650  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 21:37:41.913693  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 21:37:41.913722  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 21:37:41.923702  282279 start.go:826] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
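The host record is injected by piping the CoreDNS ConfigMap through the sed / kubectl replace command shown above. To confirm the resulting hosts block (illustrative, not from this run):

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'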
	I0108 21:37:41.939574  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 21:37:41.939602  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 21:37:42.033056  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 21:37:42.033090  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 21:37:42.126252  282279 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:42.126280  282279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 21:37:42.219356  282279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 21:37:42.612393  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.177754873s)
	I0108 21:37:42.649146  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.131058374s)
	I0108 21:37:42.649245  282279 addons.go:457] Verifying addon metrics-server=true in "default-k8s-diff-port-211952"
	I0108 21:37:43.233589  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:43.519132  282279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.299673532s)
	I0108 21:37:43.521195  282279 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-211952 addons enable metrics-server	
	
	
	I0108 21:37:43.523337  282279 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0108 21:37:39.801595  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:41.801850  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:44.301445  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:42.739717  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:45.239105  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:43.525339  282279 addons.go:488] enableAddons completed in 2.386543882s
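With the addons reported as enabled, their workloads can be checked directly; a sketch, assuming the default resource names from the minikube addon manifests:

	minikube -p default-k8s-diff-port-211952 addons list
	kubectl -n kube-system get deploy metrics-server
	kubectl -n kubernetes-dashboard get pods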
	I0108 21:37:45.732797  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:47.733580  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:46.800798  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:48.800989  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:47.738847  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:49.739115  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:52.238899  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:50.232935  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:52.233798  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:50.801073  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:52.801144  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:54.239128  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:56.739014  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:54.733016  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:56.733874  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:55.301797  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:57.801274  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:37:59.239171  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:01.239292  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:37:59.233003  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:01.233346  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:03.233665  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:37:59.801607  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:02.300746  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:04.301290  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:03.738362  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:05.738653  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:05.233897  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:07.234180  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:06.801829  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:09.301092  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:07.739372  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:10.239775  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:09.733403  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:11.733914  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:11.301300  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:13.800777  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:12.739231  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:15.238970  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:14.233667  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:16.732749  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:15.801406  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:17.801519  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:17.738673  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:19.738980  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:22.238583  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:18.733049  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:20.734111  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:23.233585  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:19.801620  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:22.301152  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:24.239366  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:26.738352  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:25.233967  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:27.732889  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:24.801117  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:27.300926  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:29.301266  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:28.739245  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:31.238599  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:29.733825  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:32.234140  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:31.301555  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:33.800917  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:33.239230  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:35.738754  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:34.733077  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:36.733560  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:35.801221  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:37.801365  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:38.239549  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:40.738973  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:38.733737  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:41.232994  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:43.233767  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:40.300687  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:42.301352  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:44.301680  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:42.739381  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:45.238776  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:47.238948  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:45.233859  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:47.733544  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:46.801357  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:48.801472  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:49.739156  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:52.239344  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:49.733766  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:52.233361  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:51.300633  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:53.301297  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:54.239534  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:56.738615  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:54.233916  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:56.733328  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:38:55.801671  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:58.301397  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:38:58.738759  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:00.739100  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:38:58.734209  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:01.232932  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:03.233020  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:00.801536  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:03.300754  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:03.239262  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:05.739203  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:05.233361  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:07.233770  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:05.301375  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:07.800934  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:08.239116  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:10.239161  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:09.733072  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:11.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:09.801368  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:12.301198  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:12.738523  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:14.739235  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:17.239112  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:14.233759  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:16.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:14.801261  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:17.300721  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:19.301075  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:19.738653  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:21.738764  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:18.733878  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:21.233705  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:21.301289  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:23.301516  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:23.738915  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:26.239205  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:23.733860  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:26.233091  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:28.233460  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:25.801475  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:28.301549  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:28.239272  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:30.738619  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:30.733105  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:32.734009  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:30.800660  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:33.301504  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:32.739223  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:35.238771  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:37.238972  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:35.233611  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:37.733328  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:35.801029  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:37.801500  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:39.239140  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:41.739302  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:39.733731  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:42.233801  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:40.301529  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:42.800621  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:44.238840  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:46.239243  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:44.733038  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:46.733391  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:44.801100  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:47.300450  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:49.301320  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:48.739022  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:51.238630  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:49.233954  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:51.733795  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:51.801285  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:53.801488  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:53.739288  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:56.239051  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:54.234004  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:56.733167  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:39:56.301044  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:58.800845  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:39:58.738520  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:00.739017  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:39:59.233766  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:01.733686  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:01.301450  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:03.301533  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:02.739209  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:04.739248  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:06.739344  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:04.233335  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:06.233688  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:08.233796  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:05.800709  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:07.801022  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:09.239054  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:11.739385  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:10.233869  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:12.733211  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:10.300739  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:12.301541  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:14.239654  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:16.739048  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:15.233047  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:17.733710  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:14.801253  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:16.801334  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:18.801736  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:19.238509  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:21.238761  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:20.232874  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:22.232916  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:21.301555  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:23.800846  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:23.239162  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:25.239455  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:27.240625  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:24.233476  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:26.733575  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:25.801246  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:28.301212  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:29.739116  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:31.739148  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:28.733746  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:31.233731  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:33.233890  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:30.301480  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:32.800970  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:34.238950  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:36.239143  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:35.733135  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:37.733332  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:38.738709  278286 node_ready.go:58] node "no-preload-211859" has status "Ready":"False"
	I0108 21:40:39.241032  278286 node_ready.go:38] duration metric: took 4m0.009684254s waiting for node "no-preload-211859" to be "Ready" ...
	I0108 21:40:39.243691  278286 out.go:177] 
	W0108 21:40:39.245553  278286 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:40:39.245570  278286 out.go:239] * 
	W0108 21:40:39.246458  278286 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:40:39.249123  278286 out.go:177] 
	I0108 21:40:35.300833  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:37.801290  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:40.233285  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:42.234025  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:40.300917  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:42.301122  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:44.301723  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:44.733707  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:47.232740  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:46.801299  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:48.801395  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:49.233976  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:51.733761  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:51.301336  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:53.301705  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:54.233585  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:56.233841  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:40:55.801251  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:58.301027  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:40:58.733149  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:01.233702  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:03.233901  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:00.301463  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:02.801220  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:05.733569  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:08.233143  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:04.801563  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:07.301530  274657 node_ready.go:58] node "old-k8s-version-211828" has status "Ready":"False"
	I0108 21:41:08.802728  274657 node_ready.go:38] duration metric: took 4m0.007692604s waiting for node "old-k8s-version-211828" to be "Ready" ...
	I0108 21:41:08.805120  274657 out.go:177] 
	W0108 21:41:08.806709  274657 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:41:08.806733  274657 out.go:239] * 
	W0108 21:41:08.807656  274657 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:41:08.809434  274657 out.go:177] 
	I0108 21:41:10.234013  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:12.733801  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:15.233487  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:17.233814  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:19.233917  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:21.234234  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:23.732866  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:25.733792  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:27.734348  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:30.233612  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:32.233852  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:34.233919  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:36.733239  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:38.733765  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:41.233693  282279 node_ready.go:58] node "default-k8s-diff-port-211952" has status "Ready":"False"
	I0108 21:41:41.235775  282279 node_ready.go:38] duration metric: took 4m0.009149141s waiting for node "default-k8s-diff-port-211952" to be "Ready" ...
	I0108 21:41:41.238174  282279 out.go:177] 
	W0108 21:41:41.239722  282279 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0108 21:41:41.239744  282279 out.go:239] * 
	W0108 21:41:41.240644  282279 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:41:41.242421  282279 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	981f399fbc5ac       d6e3e26021b60       57 seconds ago      Running             kindnet-cni               4                   b1608797a9dec
	ad37848bda844       d6e3e26021b60       4 minutes ago       Exited              kindnet-cni               3                   b1608797a9dec
	7e39d325fcec3       beaaf00edd38a       13 minutes ago      Running             kube-proxy                0                   d34fff239d9ab
	48fea364952d6       0346dbd74bcb9       13 minutes ago      Running             kube-apiserver            2                   626297ce42f86
	abdda2bcae93a       6d23ec0e8b87e       13 minutes ago      Running             kube-scheduler            2                   666f3069f4728
	e3c428ddf8ccc       6039992312758       13 minutes ago      Running             kube-controller-manager   2                   d6ec92c293591
	b4a61910cd1f4       a8a176a5d5d69       13 minutes ago      Running             etcd                      2                   2f2f9f37ad42e
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sun 2023-01-08 21:32:49 UTC, end at Sun 2023-01-08 21:50:44 UTC. --
	Jan 08 21:43:04 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:43:04.601223296Z" level=info msg="RemoveContainer for \"ab48a6e41abea1dbd6e0ebadbc510273e8cb1d053d95888dd039f42e5a79bde1\" returns successfully"
	Jan 08 21:43:17 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:43:17.923702446Z" level=info msg="CreateContainer within sandbox \"b1608797a9deca27ca99a6bd8949e5707d38c9f2c1615b72d94e754aa0a663a6\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Jan 08 21:43:17 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:43:17.937318633Z" level=info msg="CreateContainer within sandbox \"b1608797a9deca27ca99a6bd8949e5707d38c9f2c1615b72d94e754aa0a663a6\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"0d3bf8028bafa3c73147845031cbbca9bdf9d6f7e28e4761aa34b9552109bae2\""
	Jan 08 21:43:17 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:43:17.937895499Z" level=info msg="StartContainer for \"0d3bf8028bafa3c73147845031cbbca9bdf9d6f7e28e4761aa34b9552109bae2\""
	Jan 08 21:43:18 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:43:18.025256404Z" level=info msg="StartContainer for \"0d3bf8028bafa3c73147845031cbbca9bdf9d6f7e28e4761aa34b9552109bae2\" returns successfully"
	Jan 08 21:45:58 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:45:58.427212104Z" level=info msg="shim disconnected" id=0d3bf8028bafa3c73147845031cbbca9bdf9d6f7e28e4761aa34b9552109bae2
	Jan 08 21:45:58 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:45:58.427275728Z" level=warning msg="cleaning up after shim disconnected" id=0d3bf8028bafa3c73147845031cbbca9bdf9d6f7e28e4761aa34b9552109bae2 namespace=k8s.io
	Jan 08 21:45:58 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:45:58.427289078Z" level=info msg="cleaning up dead shim"
	Jan 08 21:45:58 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:45:58.435832976Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:45:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5273 runtime=io.containerd.runc.v2\n"
	Jan 08 21:45:58 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:45:58.913305262Z" level=info msg="RemoveContainer for \"c5e714c6bc655a1f19adbc135b885bb98afbe98fa6867b6a63c0de732f8effaf\""
	Jan 08 21:45:58 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:45:58.918269836Z" level=info msg="RemoveContainer for \"c5e714c6bc655a1f19adbc135b885bb98afbe98fa6867b6a63c0de732f8effaf\" returns successfully"
	Jan 08 21:46:24 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:46:24.923435896Z" level=info msg="CreateContainer within sandbox \"b1608797a9deca27ca99a6bd8949e5707d38c9f2c1615b72d94e754aa0a663a6\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Jan 08 21:46:24 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:46:24.936242482Z" level=info msg="CreateContainer within sandbox \"b1608797a9deca27ca99a6bd8949e5707d38c9f2c1615b72d94e754aa0a663a6\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"ad37848bda8447abbb65b4ea9f28c0bbcfe3b8d1ebfdd03ab56a79efe3b3b074\""
	Jan 08 21:46:24 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:46:24.936731187Z" level=info msg="StartContainer for \"ad37848bda8447abbb65b4ea9f28c0bbcfe3b8d1ebfdd03ab56a79efe3b3b074\""
	Jan 08 21:46:25 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:46:25.026015449Z" level=info msg="StartContainer for \"ad37848bda8447abbb65b4ea9f28c0bbcfe3b8d1ebfdd03ab56a79efe3b3b074\" returns successfully"
	Jan 08 21:49:05 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:49:05.455997708Z" level=info msg="shim disconnected" id=ad37848bda8447abbb65b4ea9f28c0bbcfe3b8d1ebfdd03ab56a79efe3b3b074
	Jan 08 21:49:05 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:49:05.456059272Z" level=warning msg="cleaning up after shim disconnected" id=ad37848bda8447abbb65b4ea9f28c0bbcfe3b8d1ebfdd03ab56a79efe3b3b074 namespace=k8s.io
	Jan 08 21:49:05 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:49:05.456069378Z" level=info msg="cleaning up dead shim"
	Jan 08 21:49:05 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:49:05.464819718Z" level=warning msg="cleanup warnings time=\"2023-01-08T21:49:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5390 runtime=io.containerd.runc.v2\n"
	Jan 08 21:49:06 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:49:06.243497182Z" level=info msg="RemoveContainer for \"0d3bf8028bafa3c73147845031cbbca9bdf9d6f7e28e4761aa34b9552109bae2\""
	Jan 08 21:49:06 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:49:06.249421053Z" level=info msg="RemoveContainer for \"0d3bf8028bafa3c73147845031cbbca9bdf9d6f7e28e4761aa34b9552109bae2\" returns successfully"
	Jan 08 21:49:46 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:49:46.923744962Z" level=info msg="CreateContainer within sandbox \"b1608797a9deca27ca99a6bd8949e5707d38c9f2c1615b72d94e754aa0a663a6\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Jan 08 21:49:46 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:49:46.936074993Z" level=info msg="CreateContainer within sandbox \"b1608797a9deca27ca99a6bd8949e5707d38c9f2c1615b72d94e754aa0a663a6\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"981f399fbc5ac913ce1916862c76499628445746311274f12f58b4d79bb6160e\""
	Jan 08 21:49:46 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:49:46.936468322Z" level=info msg="StartContainer for \"981f399fbc5ac913ce1916862c76499628445746311274f12f58b4d79bb6160e\""
	Jan 08 21:49:46 default-k8s-diff-port-211952 containerd[386]: time="2023-01-08T21:49:46.998306324Z" level=info msg="StartContainer for \"981f399fbc5ac913ce1916862c76499628445746311274f12f58b4d79bb6160e\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-211952
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-211952
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=default-k8s-diff-port-211952
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T21_37_27_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 21:37:24 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-211952
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 08 Jan 2023 21:50:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 21:47:49 +0000   Sun, 08 Jan 2023 21:37:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 21:47:49 +0000   Sun, 08 Jan 2023 21:37:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 21:47:49 +0000   Sun, 08 Jan 2023 21:37:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 08 Jan 2023 21:47:49 +0000   Sun, 08 Jan 2023 21:37:21 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-diff-port-211952
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                fe5ecc0a-a17f-4998-8022-5b0438ac303f
	  Boot ID:                    abb1671c-ddf5-4694-bdc8-1024e5cc0b18
	  Kernel Version:             5.15.0-1025-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.10
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-diff-port-211952                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-8s5wp                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-default-k8s-diff-port-211952              250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-211952     200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-plrbr                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-diff-port-211952              100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node default-k8s-diff-port-211952 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m   node-controller  Node default-k8s-diff-port-211952 event: Registered Node default-k8s-diff-port-211952 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +2.971851] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027844] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[  +1.027909] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 3d 08 ad 3b 15 08 06
	[Jan 8 21:19] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.006215] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023951] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.967852] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.035798] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.023925] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +2.940341] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.027361] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	[  +1.019905] IPv4: martian source 10.244.0.125 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 e9 40 8d 06 87 08 06
	
	* 
	* ==> etcd [b4a61910cd1f4724239205e8f7baa67961a32f0b087a58f553451cf3eb6d76e9] <==
	* {"level":"info","ts":"2023-01-08T21:37:21.040Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-08T21:37:21.040Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-01-08T21:37:21.040Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-01-08T21:37:21.040Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-08T21:37:21.040Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-08T21:37:21.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2023-01-08T21:37:21.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-01-08T21:37:21.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2023-01-08T21:37:21.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2023-01-08T21:37:21.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-01-08T21:37:21.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2023-01-08T21:37:21.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-01-08T21:37:21.630Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:37:21.631Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:37:21.631Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:37:21.631Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:37:21.631Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:default-k8s-diff-port-211952 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-08T21:37:21.631Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:37:21.631Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:37:21.631Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-08T21:37:21.631Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-08T21:37:21.632Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-01-08T21:37:21.632Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-08T21:47:21.972Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":526}
	{"level":"info","ts":"2023-01-08T21:47:21.973Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":526,"took":"487.239µs"}
	
	* 
	* ==> kernel <==
	*  21:50:44 up  1:33,  0 users,  load average: 0.15, 0.24, 0.53
	Linux default-k8s-diff-port-211952 5.15.0-1025-gcp #32~20.04.2-Ubuntu SMP Tue Nov 29 08:31:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [48fea364952d660231e61de96e258c90261216a9125cef423faa8556528853bf] <==
	* W0108 21:45:25.129103       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:45:25.129192       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:45:25.129206       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:47:25.133303       1 handler_proxy.go:105] no RequestInfo found in the context
	W0108 21:47:25.133305       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:47:25.133344       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:47:25.133378       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0108 21:47:25.133414       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:47:25.134541       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:48:25.133975       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:48:25.134012       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:48:25.134029       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:48:25.135146       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:48:25.135223       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:48:25.135248       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:50:25.134932       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:50:25.134983       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:50:25.134992       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:50:25.136083       1 handler_proxy.go:105] no RequestInfo found in the context
	E0108 21:50:25.136150       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:50:25.136161       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [e3c428ddf8cccb4e892d7504dcff11cf2d810ed67e3e1ddfc5fe90992e47e910] <==
	* W0108 21:44:41.044567       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:45:10.568784       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:45:11.054550       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:45:40.574506       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:45:41.065932       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:46:10.581237       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:46:11.077997       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:46:40.587599       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:46:41.087706       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:47:10.593892       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:47:11.098974       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:47:40.600159       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:47:41.110416       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:48:10.605610       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:48:11.121530       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:48:40.611500       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:48:41.132153       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:49:10.616773       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:49:11.142043       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:49:40.622852       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:49:41.152987       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:50:10.629355       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:50:11.163967       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:50:40.634354       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:50:41.174756       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [7e39d325fcec3c380ec395f031e958aba04ee24a6c69bb6f1a8b7b45ee7def8a] <==
	* I0108 21:37:42.315555       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0108 21:37:42.315778       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0108 21:37:42.315808       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0108 21:37:42.410324       1 server_others.go:206] "Using iptables Proxier"
	I0108 21:37:42.410383       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0108 21:37:42.410396       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0108 21:37:42.410417       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0108 21:37:42.410457       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:37:42.410966       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:37:42.411220       1 server.go:661] "Version info" version="v1.25.3"
	I0108 21:37:42.411233       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:37:42.412192       1 config.go:444] "Starting node config controller"
	I0108 21:37:42.412209       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0108 21:37:42.412114       1 config.go:226] "Starting endpoint slice config controller"
	I0108 21:37:42.412238       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0108 21:37:42.412476       1 config.go:317] "Starting service config controller"
	I0108 21:37:42.412506       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0108 21:37:42.512504       1 shared_informer.go:262] Caches are synced for node config
	I0108 21:37:42.512803       1 shared_informer.go:262] Caches are synced for service config
	I0108 21:37:42.513016       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [abdda2bcae93a4c9457bd4b491d97e1ceac603d4e346988062f43313f78e961c] <==
	* W0108 21:37:24.138917       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 21:37:24.138928       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 21:37:24.212018       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:37:24.212241       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:37:24.212341       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:37:24.212407       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:37:24.213805       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:37:24.213980       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:37:24.213838       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:37:24.214164       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 21:37:24.214612       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:37:24.214823       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 21:37:24.985444       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:37:24.985477       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:37:25.059921       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:37:25.059953       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 21:37:25.111960       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:37:25.111992       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 21:37:25.121962       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:37:25.121998       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:37:25.194792       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:37:25.194839       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 21:37:25.256188       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:37:25.256225       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0108 21:37:25.735198       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:32:49 UTC, end at Sun 2023-01-08 21:50:45 UTC. --
	Jan 08 21:49:07 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:49:07.273226    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:12 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:49:12.274212    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:17 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:49:17.274753    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:19 default-k8s-diff-port-211952 kubelet[3859]: I0108 21:49:19.921673    3859 scope.go:115] "RemoveContainer" containerID="ad37848bda8447abbb65b4ea9f28c0bbcfe3b8d1ebfdd03ab56a79efe3b3b074"
	Jan 08 21:49:19 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:49:19.922010    3859 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-8s5wp_kube-system(45060c63-d2ae-429a-b95f-cbbac924d3a9)\"" pod="kube-system/kindnet-8s5wp" podUID=45060c63-d2ae-429a-b95f-cbbac924d3a9
	Jan 08 21:49:22 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:49:22.275282    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:27 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:49:27.276969    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:32 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:49:32.278046    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:34 default-k8s-diff-port-211952 kubelet[3859]: I0108 21:49:34.921599    3859 scope.go:115] "RemoveContainer" containerID="ad37848bda8447abbb65b4ea9f28c0bbcfe3b8d1ebfdd03ab56a79efe3b3b074"
	Jan 08 21:49:34 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:49:34.921884    3859 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-8s5wp_kube-system(45060c63-d2ae-429a-b95f-cbbac924d3a9)\"" pod="kube-system/kindnet-8s5wp" podUID=45060c63-d2ae-429a-b95f-cbbac924d3a9
	Jan 08 21:49:37 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:49:37.279065    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:42 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:49:42.280458    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:46 default-k8s-diff-port-211952 kubelet[3859]: I0108 21:49:46.921246    3859 scope.go:115] "RemoveContainer" containerID="ad37848bda8447abbb65b4ea9f28c0bbcfe3b8d1ebfdd03ab56a79efe3b3b074"
	Jan 08 21:49:47 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:49:47.282739    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:52 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:49:52.284476    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:49:57 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:49:57.285417    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:50:02 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:50:02.286264    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:50:07 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:50:07.287702    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:50:12 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:50:12.288636    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:50:17 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:50:17.289658    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:50:22 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:50:22.291182    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:50:27 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:50:27.292448    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:50:32 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:50:32.293210    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:50:37 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:50:37.294366    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Jan 08 21:50:42 default-k8s-diff-port-211952 kubelet[3859]: E0108 21:50:42.295809    3859 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-211952 -n default-k8s-diff-port-211952
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-211952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-565d847f94-vl6gh metrics-server-5c8fd5cf8-mctg7 storage-provisioner dashboard-metrics-scraper-5949f5c576-b87fb kubernetes-dashboard-f87d45d87-bnlzk
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-diff-port-211952 describe pod coredns-565d847f94-vl6gh metrics-server-5c8fd5cf8-mctg7 storage-provisioner dashboard-metrics-scraper-5949f5c576-b87fb kubernetes-dashboard-f87d45d87-bnlzk
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-211952 describe pod coredns-565d847f94-vl6gh metrics-server-5c8fd5cf8-mctg7 storage-provisioner dashboard-metrics-scraper-5949f5c576-b87fb kubernetes-dashboard-f87d45d87-bnlzk: exit status 1 (61.832722ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-vl6gh" not found
	Error from server (NotFound): pods "metrics-server-5c8fd5cf8-mctg7" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-5949f5c576-b87fb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-f87d45d87-bnlzk" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-diff-port-211952 describe pod coredns-565d847f94-vl6gh metrics-server-5c8fd5cf8-mctg7 storage-provisioner dashboard-metrics-scraper-5949f5c576-b87fb kubernetes-dashboard-f87d45d87-bnlzk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.40s)
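
Note: the failure above follows one pattern: the node never reaches "Ready" because the kubelet keeps reporting "cni plugin not initialized" while the kindnet-cni container crash-loops (see the kubelet and container-status sections of the log). The commands below are a minimal diagnostic sketch, not part of the test run, assuming the default-k8s-diff-port-211952 profile and kubectl context from this report are still available; the pod and container names are taken from the log above.

	# confirm the node is NotReady and inspect the CNI pod named in the kubelet errors
	kubectl --context default-k8s-diff-port-211952 get nodes
	kubectl --context default-k8s-diff-port-211952 -n kube-system get pod kindnet-8s5wp
	# logs of the previously crashed kindnet-cni container
	kubectl --context default-k8s-diff-port-211952 -n kube-system logs kindnet-8s5wp -c kindnet-cni --previous
	# full cluster logs for attaching to a GitHub issue, as suggested in the output above
	out/minikube-linux-amd64 -p default-k8s-diff-port-211952 logs --file=logs.txt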

                                                
                                    

Test pass (228/268)

Order    Passed test    Duration
3 TestDownloadOnly/v1.16.0/json-events 30.86
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.25.3/json-events 23.71
11 TestDownloadOnly/v1.25.3/preload-exists 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.25
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.16
18 TestDownloadOnlyKic 5.55
19 TestBinaryMirror 0.81
20 TestOffline 66.81
22 TestAddons/Setup 156.52
24 TestAddons/parallel/Registry 15.62
25 TestAddons/parallel/Ingress 22.15
26 TestAddons/parallel/MetricsServer 5.54
27 TestAddons/parallel/HelmTiller 13.82
29 TestAddons/parallel/CSI 38.22
30 TestAddons/parallel/Headlamp 11.04
31 TestAddons/parallel/CloudSpanner 5.32
34 TestAddons/serial/GCPAuth/Namespaces 0.13
35 TestAddons/StoppedEnableDisable 20.19
36 TestCertOptions 32.61
37 TestCertExpiration 225.46
39 TestForceSystemdFlag 28.62
40 TestForceSystemdEnv 32.97
41 TestKVMDriverInstallOrUpdate 5.75
45 TestErrorSpam/setup 22.22
46 TestErrorSpam/start 0.91
47 TestErrorSpam/status 1.06
48 TestErrorSpam/pause 1.57
49 TestErrorSpam/unpause 1.52
50 TestErrorSpam/stop 1.47
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 45.47
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 15.47
57 TestFunctional/serial/KubeContext 0.05
58 TestFunctional/serial/KubectlGetPods 0.08
61 TestFunctional/serial/CacheCmd/cache/add_remote 4.14
62 TestFunctional/serial/CacheCmd/cache/add_local 2.04
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
64 TestFunctional/serial/CacheCmd/cache/list 0.07
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.03
67 TestFunctional/serial/CacheCmd/cache/delete 0.14
68 TestFunctional/serial/MinikubeKubectlCmd 0.13
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
70 TestFunctional/serial/ExtraConfig 38.05
71 TestFunctional/serial/ComponentHealth 0.07
72 TestFunctional/serial/LogsCmd 1.11
73 TestFunctional/serial/LogsFileCmd 1.14
75 TestFunctional/parallel/ConfigCmd 0.51
76 TestFunctional/parallel/DashboardCmd 8.67
77 TestFunctional/parallel/DryRun 0.61
78 TestFunctional/parallel/InternationalLanguage 0.25
79 TestFunctional/parallel/StatusCmd 1.23
82 TestFunctional/parallel/ServiceCmd 10.08
83 TestFunctional/parallel/ServiceCmdConnect 8.81
84 TestFunctional/parallel/AddonsCmd 0.22
85 TestFunctional/parallel/PersistentVolumeClaim 31.57
87 TestFunctional/parallel/SSHCmd 0.81
88 TestFunctional/parallel/CpCmd 1.43
89 TestFunctional/parallel/MySQL 19.99
90 TestFunctional/parallel/FileSync 0.35
91 TestFunctional/parallel/CertSync 2.23
95 TestFunctional/parallel/NodeLabels 0.06
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
99 TestFunctional/parallel/License 0.27
100 TestFunctional/parallel/Version/short 0.08
101 TestFunctional/parallel/Version/components 0.56
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.48
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.44
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
106 TestFunctional/parallel/ImageCommands/ImageBuild 4.4
107 TestFunctional/parallel/ImageCommands/Setup 1.49
108 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.05
110 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
112 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.38
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.06
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.64
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
125 TestFunctional/parallel/ProfileCmd/profile_list 0.47
126 TestFunctional/parallel/MountCmd/any-port 9.42
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.25
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.2
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.96
132 TestFunctional/parallel/MountCmd/specific-port 2.01
133 TestFunctional/delete_addon-resizer_images 0.08
134 TestFunctional/delete_my-image_image 0.02
135 TestFunctional/delete_minikube_cached_images 0.02
138 TestIngressAddonLegacy/StartLegacyK8sCluster 80.85
140 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 9.66
141 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.43
142 TestIngressAddonLegacy/serial/ValidateIngressAddons 47.21
145 TestJSONOutput/start/Command 42.87
146 TestJSONOutput/start/Audit 0
148 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
149 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
151 TestJSONOutput/pause/Command 0.67
152 TestJSONOutput/pause/Audit 0
154 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/unpause/Command 0.6
158 TestJSONOutput/unpause/Audit 0
160 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/stop/Command 5.72
164 TestJSONOutput/stop/Audit 0
166 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
168 TestErrorJSONOutput 0.27
170 TestKicCustomNetwork/create_custom_network 43.34
171 TestKicCustomNetwork/use_default_bridge_network 25.92
172 TestKicExistingNetwork 28.33
173 TestKicCustomSubnet 27.73
174 TestMainNoArgs 0.07
175 TestMinikubeProfile 49.85
178 TestMountStart/serial/StartWithMountFirst 5.01
179 TestMountStart/serial/VerifyMountFirst 0.32
180 TestMountStart/serial/StartWithMountSecond 5.19
181 TestMountStart/serial/VerifyMountSecond 0.32
182 TestMountStart/serial/DeleteFirst 1.73
183 TestMountStart/serial/VerifyMountPostDelete 0.32
184 TestMountStart/serial/Stop 1.23
185 TestMountStart/serial/RestartStopped 6.68
186 TestMountStart/serial/VerifyMountPostStop 0.31
189 TestMultiNode/serial/FreshStart2Nodes 89.76
190 TestMultiNode/serial/DeployApp2Nodes 4.5
191 TestMultiNode/serial/PingHostFrom2Pods 0.88
192 TestMultiNode/serial/AddNode 32.72
193 TestMultiNode/serial/ProfileList 0.35
194 TestMultiNode/serial/CopyFile 11.28
195 TestMultiNode/serial/StopNode 2.32
196 TestMultiNode/serial/StartAfterStop 30.84
197 TestMultiNode/serial/RestartKeepsNodes 154.39
198 TestMultiNode/serial/DeleteNode 4.91
199 TestMultiNode/serial/StopMultiNode 40.11
200 TestMultiNode/serial/RestartMultiNode 78.04
201 TestMultiNode/serial/ValidateNameConflict 27.51
208 TestScheduledStopUnix 99.22
211 TestInsufficientStorage 15.12
212 TestRunningBinaryUpgrade 103.84
215 TestMissingContainerUpgrade 140.56
217 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
221 TestStoppedBinaryUpgrade/Setup 3.05
222 TestNoKubernetes/serial/StartWithK8s 33.32
227 TestNetworkPlugins/group/false 0.5
231 TestStoppedBinaryUpgrade/Upgrade 157.25
232 TestNoKubernetes/serial/StartWithStopK8s 21.12
233 TestNoKubernetes/serial/Start 5.36
234 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
235 TestNoKubernetes/serial/ProfileList 2.34
236 TestNoKubernetes/serial/Stop 2.08
237 TestNoKubernetes/serial/StartNoArgs 6.62
238 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.41
239 TestStoppedBinaryUpgrade/MinikubeLogs 0.83
248 TestPause/serial/Start 47.98
249 TestNetworkPlugins/group/auto/Start 55.82
250 TestPause/serial/SecondStartNoReconfiguration 15.83
251 TestPause/serial/Pause 0.77
252 TestPause/serial/VerifyStatus 0.36
253 TestPause/serial/Unpause 0.65
254 TestPause/serial/PauseAgain 0.8
255 TestNetworkPlugins/group/auto/KubeletFlags 0.37
256 TestNetworkPlugins/group/auto/NetCatPod 9.28
257 TestPause/serial/DeletePaused 2.95
258 TestPause/serial/VerifyDeletedResources 14.17
259 TestNetworkPlugins/group/auto/DNS 0.13
260 TestNetworkPlugins/group/auto/Localhost 0.12
261 TestNetworkPlugins/group/auto/HairPin 0.12
262 TestNetworkPlugins/group/kindnet/Start 57.22
263 TestNetworkPlugins/group/cilium/Start 91.52
265 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
266 TestNetworkPlugins/group/kindnet/KubeletFlags 0.45
267 TestNetworkPlugins/group/kindnet/NetCatPod 9.26
268 TestNetworkPlugins/group/kindnet/DNS 0.16
269 TestNetworkPlugins/group/kindnet/Localhost 0.13
270 TestNetworkPlugins/group/kindnet/HairPin 0.14
271 TestNetworkPlugins/group/enable-default-cni/Start 36.69
272 TestNetworkPlugins/group/cilium/ControllerPod 5.02
273 TestNetworkPlugins/group/cilium/KubeletFlags 0.34
274 TestNetworkPlugins/group/cilium/NetCatPod 9.87
275 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
276 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.21
277 TestNetworkPlugins/group/cilium/DNS 0.13
278 TestNetworkPlugins/group/cilium/Localhost 0.13
279 TestNetworkPlugins/group/cilium/HairPin 0.13
280 TestNetworkPlugins/group/bridge/Start 38.37
282 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
283 TestNetworkPlugins/group/bridge/NetCatPod 9.23
290 TestStartStop/group/embed-certs/serial/FirstStart 45.6
293 TestStartStop/group/embed-certs/serial/DeployApp 8.3
294 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.7
295 TestStartStop/group/embed-certs/serial/Stop 20.09
296 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
297 TestStartStop/group/embed-certs/serial/SecondStart 315.1
301 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.01
302 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
303 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.35
304 TestStartStop/group/embed-certs/serial/Pause 2.98
306 TestStartStop/group/newest-cni/serial/FirstStart 46.85
307 TestStartStop/group/newest-cni/serial/DeployApp 0
308 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.6
309 TestStartStop/group/newest-cni/serial/Stop 1.27
310 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
311 TestStartStop/group/newest-cni/serial/SecondStart 29.77
312 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
313 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
314 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
315 TestStartStop/group/newest-cni/serial/Pause 2.81
316 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.59
317 TestStartStop/group/old-k8s-version/serial/Stop 1.29
318 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
320 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.6
321 TestStartStop/group/no-preload/serial/Stop 1.27
322 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.58
325 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.27
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
TestDownloadOnly/v1.16.0/json-events (30.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-202717 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-202717 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (30.856450171s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (30.86s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-202717
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-202717: exit status 85 (84.37445ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-202717 | jenkins | v1.28.0 | 08 Jan 23 20:27 UTC |          |
	|         | -p download-only-202717        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 20:27:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:27:17.976708   10384 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:27:17.976871   10384 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:27:17.976879   10384 out.go:309] Setting ErrFile to fd 2...
	I0108 20:27:17.976883   10384 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:27:17.976970   10384 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	W0108 20:27:17.977070   10384 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15565-3617/.minikube/config/config.json: open /home/jenkins/minikube-integration/15565-3617/.minikube/config/config.json: no such file or directory
	I0108 20:27:17.977636   10384 out.go:303] Setting JSON to true
	I0108 20:27:17.978384   10384 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":587,"bootTime":1673209051,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:27:17.978439   10384 start.go:135] virtualization: kvm guest
	I0108 20:27:17.981690   10384 out.go:97] [download-only-202717] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:27:17.981762   10384 notify.go:220] Checking for updates...
	I0108 20:27:17.983539   10384 out.go:169] MINIKUBE_LOCATION=15565
	W0108 20:27:17.981768   10384 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball: no such file or directory
	I0108 20:27:17.986882   10384 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:27:17.988650   10384 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 20:27:17.990323   10384 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 20:27:17.991926   10384 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 20:27:17.994801   10384 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:27:17.994921   10384 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 20:27:18.020952   10384 docker.go:137] docker version: linux-20.10.22
	I0108 20:27:18.021013   10384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:27:18.862628   10384 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-08 20:27:18.038166644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:27:18.862769   10384 docker.go:254] overlay module found
	I0108 20:27:18.864983   10384 out.go:97] Using the docker driver based on user configuration
	I0108 20:27:18.865000   10384 start.go:294] selected driver: docker
	I0108 20:27:18.865010   10384 start.go:838] validating driver "docker" against <nil>
	I0108 20:27:18.865107   10384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:27:18.963178   10384 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-08 20:27:18.882383764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:27:18.963324   10384 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I0108 20:27:18.963828   10384 start_flags.go:384] Using suggested 8000MB memory alloc based on sys=32101MB, container=32101MB
	I0108 20:27:18.963943   10384 start_flags.go:892] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 20:27:18.966478   10384 out.go:169] Using Docker driver with root privileges
	I0108 20:27:18.968125   10384 cni.go:95] Creating CNI manager for ""
	I0108 20:27:18.968138   10384 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 20:27:18.968159   10384 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0108 20:27:18.968171   10384 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0108 20:27:18.968177   10384 start_flags.go:312] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 20:27:18.968209   10384 start_flags.go:317] config:
	{Name:download-only-202717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-202717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 20:27:18.970066   10384 out.go:97] Starting control plane node download-only-202717 in cluster download-only-202717
	I0108 20:27:18.970087   10384 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 20:27:18.971666   10384 out.go:97] Pulling base image ...
	I0108 20:27:18.971686   10384 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0108 20:27:18.971789   10384 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 20:27:18.991373   10384 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c to local cache
	I0108 20:27:18.991675   10384 image.go:60] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local cache directory
	I0108 20:27:18.991774   10384 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c to local cache
	I0108 20:27:19.077408   10384 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0108 20:27:19.077434   10384 cache.go:57] Caching tarball of preloaded images
	I0108 20:27:19.077604   10384 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0108 20:27:19.080218   10384 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0108 20:27:19.080241   10384 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0108 20:27:19.187981   10384 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0108 20:27:32.602636   10384 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0108 20:27:32.602707   10384 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0108 20:27:33.480765   10384 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0108 20:27:33.481058   10384 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/download-only-202717/config.json ...
	I0108 20:27:33.481085   10384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/download-only-202717/config.json: {Name:mkbf2d9a531fae7130feddecce5db8f8f6efd226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:27:33.481282   10384 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0108 20:27:33.481487   10384 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-202717"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.25.3/json-events (23.71s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-202717 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-202717 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (23.707986704s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (23.71s)

                                                
                                    
TestDownloadOnly/v1.25.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-202717
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-202717: exit status 85 (83.306216ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-202717 | jenkins | v1.28.0 | 08 Jan 23 20:27 UTC |          |
	|         | -p download-only-202717        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-202717 | jenkins | v1.28.0 | 08 Jan 23 20:27 UTC |          |
	|         | -p download-only-202717        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 20:27:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:27:48.917588   10555 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:27:48.917701   10555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:27:48.917712   10555 out.go:309] Setting ErrFile to fd 2...
	I0108 20:27:48.917717   10555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:27:48.917854   10555 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	W0108 20:27:48.917985   10555 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15565-3617/.minikube/config/config.json: open /home/jenkins/minikube-integration/15565-3617/.minikube/config/config.json: no such file or directory
	I0108 20:27:48.918419   10555 out.go:303] Setting JSON to true
	I0108 20:27:48.919242   10555 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":618,"bootTime":1673209051,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:27:48.919298   10555 start.go:135] virtualization: kvm guest
	I0108 20:27:48.922046   10555 out.go:97] [download-only-202717] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:27:48.922136   10555 notify.go:220] Checking for updates...
	I0108 20:27:48.923703   10555 out.go:169] MINIKUBE_LOCATION=15565
	I0108 20:27:48.925486   10555 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:27:48.927297   10555 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 20:27:48.928832   10555 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 20:27:48.930617   10555 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 20:27:48.934393   10555 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:27:48.935102   10555 config.go:180] Loaded profile config "download-only-202717": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0108 20:27:48.935142   10555 start.go:746] api.Load failed for download-only-202717: filestore "download-only-202717": Docker machine "download-only-202717" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:27:48.935187   10555 driver.go:365] Setting default libvirt URI to qemu:///system
	W0108 20:27:48.935229   10555 start.go:746] api.Load failed for download-only-202717: filestore "download-only-202717": Docker machine "download-only-202717" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:27:48.960672   10555 docker.go:137] docker version: linux-20.10.22
	I0108 20:27:48.960744   10555 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:27:49.049741   10555 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-08 20:27:48.978095993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:27:49.049832   10555 docker.go:254] overlay module found
	I0108 20:27:49.051730   10555 out.go:97] Using the docker driver based on existing profile
	I0108 20:27:49.051747   10555 start.go:294] selected driver: docker
	I0108 20:27:49.051752   10555 start.go:838] validating driver "docker" against &{Name:download-only-202717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-202717 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 20:27:49.051880   10555 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:27:49.141341   10555 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-08 20:27:49.068866054 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:27:49.141845   10555 cni.go:95] Creating CNI manager for ""
	I0108 20:27:49.141859   10555 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0108 20:27:49.141871   10555 start_flags.go:317] config:
	{Name:download-only-202717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-202717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket
_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 20:27:49.144021   10555 out.go:97] Starting control plane node download-only-202717 in cluster download-only-202717
	I0108 20:27:49.144047   10555 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0108 20:27:49.145627   10555 out.go:97] Pulling base image ...
	I0108 20:27:49.145655   10555 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 20:27:49.145759   10555 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 20:27:49.164956   10555 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c to local cache
	I0108 20:27:49.165179   10555 image.go:60] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local cache directory
	I0108 20:27:49.165199   10555 image.go:63] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local cache directory, skipping pull
	I0108 20:27:49.165204   10555 image.go:102] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in cache, skipping pull
	I0108 20:27:49.165218   10555 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c as a tarball
	I0108 20:27:49.479010   10555 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0108 20:27:49.479057   10555 cache.go:57] Caching tarball of preloaded images
	I0108 20:27:49.479278   10555 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0108 20:27:49.481418   10555 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I0108 20:27:49.481438   10555 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 ...
	I0108 20:27:49.591289   10555 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:60f9fee056da17edf086af60afca6341 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-202717"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.25s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-202717
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnlyKic (5.55s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-202813 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-202813 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (4.096769053s)
helpers_test.go:175: Cleaning up "download-docker-202813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-202813
--- PASS: TestDownloadOnlyKic (5.55s)

                                                
                                    
TestBinaryMirror (0.81s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-202818 --alsologtostderr --binary-mirror http://127.0.0.1:33347 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-202818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-202818
--- PASS: TestBinaryMirror (0.81s)

                                                
                                    
TestOffline (66.81s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-210618 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-210618 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m4.412907151s)
helpers_test.go:175: Cleaning up "offline-containerd-210618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-210618

                                                
                                                
=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-210618: (2.398467664s)
--- PASS: TestOffline (66.81s)

                                                
                                    
TestAddons/Setup (156.52s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p addons-202819 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p addons-202819 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m36.521932143s)
--- PASS: TestAddons/Setup (156.52s)

                                                
                                    
TestAddons/parallel/Registry (15.62s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:287: registry stabilized in 11.188158ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-thhvn" [4431655d-6f97-44a2-a269-f7c5cd0b56b4] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007643929s
addons_test.go:292: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-7n9mx" [c1fec8e0-5c46-4e8f-bc31-e034f1bf2485] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:292: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006526796s
addons_test.go:297: (dbg) Run:  kubectl --context addons-202819 delete po -l run=registry-test --now
addons_test.go:302: (dbg) Run:  kubectl --context addons-202819 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:302: (dbg) Done: kubectl --context addons-202819 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.57392759s)
addons_test.go:316: (dbg) Run:  out/minikube-linux-amd64 -p addons-202819 ip

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p addons-202819 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.62s)

                                                
                                    
TestAddons/parallel/Ingress (22.15s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:169: (dbg) Run:  kubectl --context addons-202819 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:189: (dbg) Run:  kubectl --context addons-202819 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:202: (dbg) Run:  kubectl --context addons-202819 replace --force -f testdata/nginx-pod-svc.yaml

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [2b98aadd-6b78-47f4-821b-7a34bf0ff80a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [2b98aadd-6b78-47f4-821b-7a34bf0ff80a] Running

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.005704554s
addons_test.go:219: (dbg) Run:  out/minikube-linux-amd64 -p addons-202819 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:243: (dbg) Run:  kubectl --context addons-202819 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p addons-202819 ip
addons_test.go:254: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p addons-202819 addons disable ingress-dns --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:268: (dbg) Run:  out/minikube-linux-amd64 -p addons-202819 addons disable ingress --alsologtostderr -v=1
2023/01/08 20:31:10 [DEBUG] GET http://192.168.49.2:5000

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:268: (dbg) Done: out/minikube-linux-amd64 -p addons-202819 addons disable ingress --alsologtostderr -v=1: (7.485504583s)
--- PASS: TestAddons/parallel/Ingress (22.15s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.54s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:364: metrics-server stabilized in 2.7217ms
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-56c6cfbdd9-rrgln" [9accd7ac-031e-4a70-9fc2-98aa0d0b467c] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009191154s
addons_test.go:372: (dbg) Run:  kubectl --context addons-202819 top pods -n kube-system
addons_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p addons-202819 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.54s)

                                                
                                    
TestAddons/parallel/HelmTiller (13.82s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:413: tiller-deploy stabilized in 1.943949ms
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-5mjbw" [be30dca3-232e-4d94-b9f8-2e1d7effe132] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008652152s
addons_test.go:430: (dbg) Run:  kubectl --context addons-202819 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:430: (dbg) Done: kubectl --context addons-202819 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.351622767s)
addons_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p addons-202819 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.82s)

                                                
                                    
TestAddons/parallel/CSI (38.22s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:518: csi-hostpath-driver pods stabilized in 4.943804ms
addons_test.go:521: (dbg) Run:  kubectl --context addons-202819 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:526: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-202819 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:531: (dbg) Run:  kubectl --context addons-202819 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:536: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [56de0e8a-1112-4449-90f0-fcba7f34663f] Pending
helpers_test.go:342: "task-pv-pod" [56de0e8a-1112-4449-90f0-fcba7f34663f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [56de0e8a-1112-4449-90f0-fcba7f34663f] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:536: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.00842028s
addons_test.go:541: (dbg) Run:  kubectl --context addons-202819 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:546: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-202819 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-202819 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:551: (dbg) Run:  kubectl --context addons-202819 delete pod task-pv-pod
addons_test.go:557: (dbg) Run:  kubectl --context addons-202819 delete pvc hpvc
addons_test.go:563: (dbg) Run:  kubectl --context addons-202819 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-202819 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-202819 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [a0f46041-77c8-4870-bbca-9c4122720667] Pending
helpers_test.go:342: "task-pv-pod-restore" [a0f46041-77c8-4870-bbca-9c4122720667] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [a0f46041-77c8-4870-bbca-9c4122720667] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.007097829s
addons_test.go:583: (dbg) Run:  kubectl --context addons-202819 delete pod task-pv-pod-restore
addons_test.go:587: (dbg) Run:  kubectl --context addons-202819 delete pvc hpvc-restore
addons_test.go:591: (dbg) Run:  kubectl --context addons-202819 delete volumesnapshot new-snapshot-demo
addons_test.go:595: (dbg) Run:  out/minikube-linux-amd64 -p addons-202819 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:595: (dbg) Done: out/minikube-linux-amd64 -p addons-202819 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.634479997s)
addons_test.go:599: (dbg) Run:  out/minikube-linux-amd64 -p addons-202819 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (38.22s)
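
The log above walks through a full provision, snapshot, and restore cycle. A condensed manual version of the same flow is sketched below, assuming the csi-hostpath-driver and volumesnapshots addons are enabled and reusing the manifest names from the test's testdata directory (their contents are not reproduced here).

    # Provision a PVC and a pod that mounts it, then wait for the pod.
    kubectl create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl wait --for=condition=Ready pod/task-pv-pod --timeout=6m

    # Snapshot the volume and poll until it reports readyToUse.
    kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'

    # Restore the snapshot into a new PVC and pod.
    kubectl delete pod task-pv-pod
    kubectl delete pvc hpvc
    kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
    kubectl wait --for=condition=Ready pod/task-pv-pod-restore --timeout=6m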

                                                
                                    
TestAddons/parallel/Headlamp (11.04s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:774: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-202819 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:774: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-202819 --alsologtostderr -v=1: (1.035829751s)
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-764769c887-cz8ws" [7628c7a4-9374-4e2a-bed0-88091dff2d8a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-764769c887-cz8ws" [7628c7a4-9374-4e2a-bed0-88091dff2d8a] Running

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.005885332s
--- PASS: TestAddons/parallel/Headlamp (11.04s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.32s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
helpers_test.go:342: "cloud-spanner-emulator-7d7766f55c-ccrzn" [edfa5558-d58b-4f24-a665-9c4baacea3ff] Running

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005884557s
addons_test.go:798: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-202819
--- PASS: TestAddons/parallel/CloudSpanner (5.32s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:607: (dbg) Run:  kubectl --context addons-202819 create ns new-namespace
addons_test.go:621: (dbg) Run:  kubectl --context addons-202819 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)
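
This sub-test checks that the gcp-auth addon copies its pull secret into namespaces created after the addon was enabled. The equivalent manual check uses the same two commands as the log:

    # Create a fresh namespace, then confirm the addon replicated its secret into it.
    kubectl --context addons-202819 create ns new-namespace
    kubectl --context addons-202819 get secret gcp-auth -n new-namespace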

                                                
                                    
TestAddons/StoppedEnableDisable (20.19s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:139: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-202819
addons_test.go:139: (dbg) Done: out/minikube-linux-amd64 stop -p addons-202819: (19.99441444s)
addons_test.go:143: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-202819
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-202819
--- PASS: TestAddons/StoppedEnableDisable (20.19s)

                                                
                                    
TestCertOptions (32.61s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-210727 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-210727 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (26.828705006s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-210727 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-210727 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-210727 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-210727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-210727
E0108 21:07:57.124957   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-210727: (4.970684912s)
--- PASS: TestCertOptions (32.61s)
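
TestCertOptions starts a cluster with extra apiserver SANs and a non-default apiserver port, then inspects the generated certificate. A hedged manual sketch of the same verification follows; the profile name and flags come from the log, while the grep filters are only illustrative.

    # Start with custom apiserver IPs, names and port, as in the test.
    minikube start -p cert-options-210727 --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=docker --container-runtime=containerd

    # The extra IPs and names should show up as Subject Alternative Names.
    minikube -p cert-options-210727 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"

    # Both kubeconfig and the in-node admin.conf should point at port 8555.
    kubectl --context cert-options-210727 config view | grep 8555
    minikube ssh -p cert-options-210727 -- "sudo cat /etc/kubernetes/admin.conf" | grep 8555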

                                                
                                    
TestCertExpiration (225.46s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-210725 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-210725 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (26.483340938s)

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-210725 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-210725 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (16.025810601s)
helpers_test.go:175: Cleaning up "cert-expiration-210725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-210725
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-210725: (2.950269514s)
--- PASS: TestCertExpiration (225.46s)
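
This test issues certificates that expire after three minutes, waits past that window, and then restarts with a long expiration so minikube regenerates them. A minimal reproduction under those assumptions, again using a plain minikube binary:

    # Issue short-lived (3m) certificates.
    minikube start -p cert-expiration-210725 --memory=2048 --cert-expiration=3m \
      --driver=docker --container-runtime=containerd

    # ... wait roughly three minutes for the certificates to expire ...

    # Restarting with a longer expiration forces regeneration of the certs.
    minikube start -p cert-expiration-210725 --memory=2048 --cert-expiration=8760h \
      --driver=docker --container-runtime=containerd

    minikube delete -p cert-expiration-210725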

                                                
                                    
TestForceSystemdFlag (28.62s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-210658 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-210658 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (26.125042393s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-210658 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-210658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-210658

                                                
                                                
=== CONT  TestForceSystemdFlag
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-210658: (2.073866307s)
--- PASS: TestForceSystemdFlag (28.62s)
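
The test starts a cluster with --force-systemd and then reads the containerd config over SSH; the exact assertion lives in docker_test.go. A manual spot-check along the same lines is sketched below; grepping for SystemdCgroup is an assumption about the relevant setting, not something shown in the log.

    minikube start -p force-systemd-flag-210658 --memory=2048 --force-systemd \
      --driver=docker --container-runtime=containerd

    # With --force-systemd, containerd is expected to use the systemd cgroup driver.
    minikube -p force-systemd-flag-210658 ssh "cat /etc/containerd/config.toml" \
      | grep SystemdCgroup

    minikube delete -p force-systemd-flag-210658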

                                                
                                    
TestForceSystemdEnv (32.97s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-210625 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0108 21:06:38.422783   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-210625 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (29.020767556s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-210625 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-210625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-210625
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-210625: (3.476610864s)
--- PASS: TestForceSystemdEnv (32.97s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.75s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.75s)

                                                
                                    
TestErrorSpam/setup (22.22s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-204032 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-204032 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-204032 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-204032 --driver=docker  --container-runtime=containerd: (22.21712246s)
--- PASS: TestErrorSpam/setup (22.22s)

                                                
                                    
TestErrorSpam/start (0.91s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-204032 --log_dir /tmp/nospam-204032 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-204032 --log_dir /tmp/nospam-204032 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-204032 --log_dir /tmp/nospam-204032 start --dry-run
--- PASS: TestErrorSpam/start (0.91s)

                                                
                                    
TestErrorSpam/status (1.06s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-204032 --log_dir /tmp/nospam-204032 status
E0108 20:40:56.111826   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
E0108 20:40:56.117450   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
E0108 20:40:56.127691   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
E0108 20:40:56.147940   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
E0108 20:40:56.188237   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-204032 --log_dir /tmp/nospam-204032 status
E0108 20:40:56.269281   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
E0108 20:40:56.429632   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-204032 --log_dir /tmp/nospam-204032 status
E0108 20:40:56.750375   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
--- PASS: TestErrorSpam/status (1.06s)

                                                
                                    
TestErrorSpam/pause (1.57s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-204032 --log_dir /tmp/nospam-204032 pause
E0108 20:40:57.390892   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-204032 --log_dir /tmp/nospam-204032 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-204032 --log_dir /tmp/nospam-204032 pause
--- PASS: TestErrorSpam/pause (1.57s)

                                                
                                    
TestErrorSpam/unpause (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-204032 --log_dir /tmp/nospam-204032 unpause
E0108 20:40:58.671405   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-204032 --log_dir /tmp/nospam-204032 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-204032 --log_dir /tmp/nospam-204032 unpause
--- PASS: TestErrorSpam/unpause (1.52s)

                                                
                                    
TestErrorSpam/stop (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-204032 --log_dir /tmp/nospam-204032 stop
E0108 20:41:01.231655   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-204032 --log_dir /tmp/nospam-204032 stop: (1.234033379s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-204032 --log_dir /tmp/nospam-204032 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-204032 --log_dir /tmp/nospam-204032 stop
--- PASS: TestErrorSpam/stop (1.47s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/test/nested/copy/10372/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (45.47s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-linux-amd64 start -p functional-204106 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0108 20:41:16.592458   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
E0108 20:41:37.072973   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
functional_test.go:2161: (dbg) Done: out/minikube-linux-amd64 start -p functional-204106 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (45.473259868s)
--- PASS: TestFunctional/serial/StartWithProxy (45.47s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (15.47s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-linux-amd64 start -p functional-204106 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-linux-amd64 start -p functional-204106 --alsologtostderr -v=8: (15.470726241s)
functional_test.go:656: soft start took 15.471387639s for "functional-204106" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.47s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-204106 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-204106 cache add k8s.gcr.io/pause:3.1: (1.538903513s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-204106 cache add k8s.gcr.io/pause:3.3: (1.534624547s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-204106 cache add k8s.gcr.io/pause:latest: (1.062741966s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.14s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-204106 /tmp/TestFunctionalserialCacheCmdcacheadd_local1660860261/001
functional_test.go:1082: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 cache add minikube-local-cache-test:functional-204106
functional_test.go:1082: (dbg) Done: out/minikube-linux-amd64 -p functional-204106 cache add minikube-local-cache-test:functional-204106: (1.790236528s)
functional_test.go:1087: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 cache delete minikube-local-cache-test:functional-204106
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-204106
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.04s)
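
add_local builds a throwaway image on the host and pushes it into minikube's image cache. A condensed version of the same sequence; the build context path is a placeholder for the temp directory the test generates.

    # Build a local image on the host docker daemon (any directory with a Dockerfile works).
    docker build -t minikube-local-cache-test:functional-204106 ./local-cache-build-context

    # Add it to the profile's image cache, then remove it from the cache again.
    minikube -p functional-204106 cache add minikube-local-cache-test:functional-204106
    minikube -p functional-204106 cache delete minikube-local-cache-test:functional-204106

    # Clean up the host-side image.
    docker rmi minikube-local-cache-test:functional-204106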

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204106 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (323.998341ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-linux-amd64 -p functional-204106 cache reload: (1.033188241s)
functional_test.go:1156: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.03s)
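
cache_reload deletes a cached image from inside the node, confirms it is gone, and restores it with minikube cache reload. The same sequence by hand, using the commands from the log:

    # Remove the image from the node's container runtime.
    minikube -p functional-204106 ssh sudo crictl rmi k8s.gcr.io/pause:latest

    # This now fails (exit status 1) because the image is no longer present.
    minikube -p functional-204106 ssh sudo crictl inspecti k8s.gcr.io/pause:latest || true

    # Reload the cache into the node and verify the image is back.
    minikube -p functional-204106 cache reload
    minikube -p functional-204106 ssh sudo crictl inspecti k8s.gcr.io/pause:latest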

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 kubectl -- --context functional-204106 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-204106 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.05s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-linux-amd64 start -p functional-204106 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0108 20:42:18.034027   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
functional_test.go:750: (dbg) Done: out/minikube-linux-amd64 start -p functional-204106 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.051959913s)
functional_test.go:754: restart took 38.05216303s for "functional-204106" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.05s)
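
ExtraConfig restarts the existing profile with an additional apiserver flag and waits for all components to come back. The corresponding invocation, taken directly from the log, with a follow-up check borrowed from the ComponentHealth test:

    # Restart with an admission plugin passed through to the apiserver.
    minikube start -p functional-204106 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all

    # Control-plane pods should come back Ready after the restart.
    kubectl --context functional-204106 get po -l tier=control-plane -n kube-system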

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-204106 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.11s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 logs
functional_test.go:1229: (dbg) Done: out/minikube-linux-amd64 -p functional-204106 logs: (1.11064341s)
--- PASS: TestFunctional/serial/LogsCmd (1.11s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 logs --file /tmp/TestFunctionalserialLogsFileCmd855341157/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-linux-amd64 -p functional-204106 logs --file /tmp/TestFunctionalserialLogsFileCmd855341157/001/logs.txt: (1.135432444s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.14s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204106 config get cpus: exit status 14 (81.909022ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 config set cpus 2

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204106 config get cpus: exit status 14 (78.438327ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
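
The ConfigCmd run exercises the set/get/unset lifecycle of a profile config key, including the exit status 14 returned when the key is absent. A manual walk-through of the same behaviour:

    # No value set yet: config get exits 14 with "specified key could not be found".
    minikube -p functional-204106 config get cpus || echo "exit=$?"

    # Set a value, read it back, then unset it again.
    minikube -p functional-204106 config set cpus 2
    minikube -p functional-204106 config get cpus
    minikube -p functional-204106 config unset cpus

    # Back to the not-found case.
    minikube -p functional-204106 config get cpus || echo "exit=$?"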

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-204106 --alsologtostderr -v=1]

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-204106 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 46696: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.67s)

                                                
                                    
TestFunctional/parallel/DryRun (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-linux-amd64 start -p functional-204106 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:967: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-204106 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (225.015427ms)

                                                
                                                
-- stdout --
	* [functional-204106] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 20:43:11.252472   45726 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:43:11.252667   45726 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:43:11.252677   45726 out.go:309] Setting ErrFile to fd 2...
	I0108 20:43:11.252681   45726 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:43:11.252801   45726 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 20:43:11.253312   45726 out.go:303] Setting JSON to false
	I0108 20:43:11.254417   45726 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1541,"bootTime":1673209051,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:43:11.254477   45726 start.go:135] virtualization: kvm guest
	I0108 20:43:11.257391   45726 out.go:177] * [functional-204106] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:43:11.259113   45726 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 20:43:11.259064   45726 notify.go:220] Checking for updates...
	I0108 20:43:11.260669   45726 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:43:11.262459   45726 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 20:43:11.264058   45726 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 20:43:11.265557   45726 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:43:11.267615   45726 config.go:180] Loaded profile config "functional-204106": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 20:43:11.268088   45726 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 20:43:11.296505   45726 docker.go:137] docker version: linux-20.10.22
	I0108 20:43:11.296615   45726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:43:11.394132   45726 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:40 SystemTime:2023-01-08 20:43:11.315628149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:43:11.394236   45726 docker.go:254] overlay module found
	I0108 20:43:11.398315   45726 out.go:177] * Using the docker driver based on existing profile
	I0108 20:43:11.399862   45726 start.go:294] selected driver: docker
	I0108 20:43:11.399883   45726 start.go:838] validating driver "docker" against &{Name:functional-204106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-204106 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-
policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 20:43:11.400069   45726 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:43:11.402939   45726 out.go:177] 
	W0108 20:43:11.404698   45726 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0108 20:43:11.406468   45726 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-linux-amd64 start -p functional-204106 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.61s)
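
DryRun shows that validation happens even when nothing is created: asking for 250MB of memory fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23, since 250MiB is below the 1800MB minimum quoted above), while a dry run with the existing profile's settings succeeds. By hand:

    # Fails validation: requested memory is below the usable minimum.
    minikube start -p functional-204106 --dry-run --memory 250MB \
      --driver=docker --container-runtime=containerd || echo "exit=$?"

    # A dry run against the existing profile's settings passes.
    minikube start -p functional-204106 --dry-run --alsologtostderr -v=1 \
      --driver=docker --container-runtime=containerd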

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 start -p functional-204106 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-204106 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (254.584242ms)

                                                
                                                
-- stdout --
	* [functional-204106] minikube v1.28.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 20:43:11.865884   46111 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:43:11.866066   46111 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:43:11.866083   46111 out.go:309] Setting ErrFile to fd 2...
	I0108 20:43:11.866091   46111 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:43:11.866348   46111 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 20:43:11.867081   46111 out.go:303] Setting JSON to false
	I0108 20:43:11.868441   46111 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1541,"bootTime":1673209051,"procs":347,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:43:11.868517   46111 start.go:135] virtualization: kvm guest
	I0108 20:43:11.871403   46111 out.go:177] * [functional-204106] minikube v1.28.0 sur Ubuntu 20.04 (kvm/amd64)
	I0108 20:43:11.872977   46111 notify.go:220] Checking for updates...
	I0108 20:43:11.874651   46111 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 20:43:11.876171   46111 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:43:11.877864   46111 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 20:43:11.879719   46111 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 20:43:11.881270   46111 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:43:11.883335   46111 config.go:180] Loaded profile config "functional-204106": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 20:43:11.883899   46111 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 20:43:11.918597   46111 docker.go:137] docker version: linux-20.10.22
	I0108 20:43:11.918689   46111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:43:12.032833   46111 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2023-01-08 20:43:11.944122759 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:43:12.032963   46111 docker.go:254] overlay module found
	I0108 20:43:12.036694   46111 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0108 20:43:12.038202   46111 start.go:294] selected driver: docker
	I0108 20:43:12.038223   46111 start.go:838] validating driver "docker" against &{Name:functional-204106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-204106 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-
policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 20:43:12.038347   46111 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:43:12.040704   46111 out.go:177] 
	W0108 20:43:12.042133   46111 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0108 20:43:12.043764   46111 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:853: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:865: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.23s)
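The status checks above exercise three output modes: the default table, a Go-template format string, and JSON. A minimal sketch of consuming the JSON form programmatically, assuming the minikube binary is on PATH and the functional-204106 profile exists (the struct fields mirror the keys queried by the template in the log):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// clusterStatus mirrors the fields the test queries via its Go-template.
type clusterStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// Ask minikube for machine-readable status, as the test does with "-o json".
	out, err := exec.Command("minikube", "-p", "functional-204106", "status", "-o", "json").Output()
	if err != nil {
		log.Fatalf("minikube status: %v", err)
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("decode status: %v", err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}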

                                                
                                    
TestFunctional/parallel/ServiceCmd (10.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-204106 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1439: (dbg) Run:  kubectl --context functional-204106 expose deployment hello-node --type=NodePort --port=8080

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-mfhb7" [e46ca151-ab05-4be8-b6ea-991736c0c9ad] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-mfhb7" [e46ca151-ab05-4be8-b6ea-991736c0c9ad] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 8.012281808s
functional_test.go:1449: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 service list
functional_test.go:1463: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 service --namespace=default --https --url hello-node
functional_test.go:1476: found endpoint: https://192.168.49.2:32595
functional_test.go:1491: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 service hello-node --url --format={{.IP}}
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 service hello-node --url

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1511: found endpoint for hello-node: http://192.168.49.2:32595
--- PASS: TestFunctional/parallel/ServiceCmd (10.08s)
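The sequence above (create a deployment, expose it as a NodePort, then resolve its URL with minikube service) can be reproduced outside the test harness. A rough sketch, assuming kubectl and minikube are on PATH and the functional-204106 context/profile already exist; the image and port match the test:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// run executes a command and returns trimmed stdout, failing loudly on error.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	// Create the deployment and expose it on a NodePort, mirroring the test steps.
	run("kubectl", "--context", "functional-204106", "create", "deployment", "hello-node",
		"--image=k8s.gcr.io/echoserver:1.8")
	run("kubectl", "--context", "functional-204106", "expose", "deployment", "hello-node",
		"--type=NodePort", "--port=8080")
	// Resolve the node URL without opening a browser.
	url := run("minikube", "-p", "functional-204106", "service", "hello-node", "--url")
	fmt.Println("hello-node reachable at", url)
}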

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-204106 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-204106 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-qkxnq" [ba3b899a-e458-4356-83d7-ecf9f8495b15] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-6458c8fb6f-qkxnq" [ba3b899a-e458-4356-83d7-ecf9f8495b15] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-connect-6458c8fb6f-qkxnq" [ba3b899a-e458-4356-83d7-ecf9f8495b15] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.005833639s
functional_test.go:1579: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 service hello-node-connect --url
functional_test.go:1585: found endpoint for hello-node-connect: http://192.168.49.2:31706
functional_test.go:1605: http://192.168.49.2:31706: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6458c8fb6f-qkxnq

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31706
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.81s)
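The echoserver reply above is what a plain HTTP GET against the NodePort returns. A small sketch of the connectivity check itself, assuming the endpoint printed by minikube service --url (http://192.168.49.2:31706 in this run) is reachable from the host:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Endpoint as reported by "minikube service hello-node-connect --url" in the log.
	const endpoint = "http://192.168.49.2:31706"

	resp, err := http.Get(endpoint)
	if err != nil {
		log.Fatalf("GET %s: %v", endpoint, err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("read body: %v", err)
	}
	// The echoserver prints its pod hostname, so a quick sanity check is enough.
	if !strings.Contains(string(body), "Hostname:") {
		log.Fatalf("unexpected response:\n%s", body)
	}
	fmt.Println("service responded with", resp.Status)
}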

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (31.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [23ca6ffe-0ebf-43e4-8d75-5f89f7893c4f] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007158316s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-204106 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-204106 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-204106 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-204106 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [be9e21ab-db6b-430c-9f12-0f21a45418ce] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [be9e21ab-db6b-430c-9f12-0f21a45418ce] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [be9e21ab-db6b-430c-9f12-0f21a45418ce] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.067595863s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-204106 exec sp-pod -- touch /tmp/mount/foo

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-204106 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-204106 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [b617fde5-2dbc-43a6-bfbb-d1e4571b093f] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [b617fde5-2dbc-43a6-bfbb-d1e4571b093f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [b617fde5-2dbc-43a6-bfbb-d1e4571b093f] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.005823445s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-204106 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (31.57s)
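The PVC test above verifies that data written into the mounted volume survives pod deletion: it touches /tmp/mount/foo in the first sp-pod, deletes the pod, re-creates it against the same claim, and lists the mount again. A condensed sketch of that round trip, assuming the same testdata manifests and context are available locally:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func kubectl(args ...string) string {
	args = append([]string{"--context", "functional-204106"}, args...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Claim storage and start a pod that mounts it (paths are the test's own manifests).
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")

	// Write a marker file, recycle the pod, then confirm the file is still there.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")

	if out := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"); !strings.Contains(out, "foo") {
		log.Fatalf("marker file missing after pod re-creation:\n%s", out)
	}
	log.Println("volume contents survived pod deletion")
}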

                                                
                                    
TestFunctional/parallel/SSHCmd (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh "echo hello"
functional_test.go:1672: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.81s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 cp testdata/cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh -n functional-204106 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 cp functional-204106:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1291119147/001/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh -n functional-204106 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.43s)
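The cp test copies a local file into the node and back, verifying content with ssh plus sudo cat. A minimal sketch of the same round trip, assuming the functional-204106 profile and a hypothetical local cp-test.txt:

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func mk(args ...string) []byte {
	out, err := exec.Command("minikube", append([]string{"-p", "functional-204106"}, args...)...).Output()
	if err != nil {
		log.Fatalf("minikube %v: %v", args, err)
	}
	return out
}

func main() {
	local, err := os.ReadFile("cp-test.txt") // hypothetical local test file
	if err != nil {
		log.Fatal(err)
	}
	// Copy host -> node, then read it back over ssh and compare.
	mk("cp", "cp-test.txt", "/home/docker/cp-test.txt")
	remote := mk("ssh", "sudo cat /home/docker/cp-test.txt")
	if !bytes.Equal(bytes.TrimSpace(local), bytes.TrimSpace(remote)) {
		log.Fatal("copied file does not match the original")
	}
	log.Println("cp round trip OK")
}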

                                                
                                    
TestFunctional/parallel/MySQL (19.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-204106 replace --force -f testdata/mysql.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-4cfs2" [de8cd3ee-c6d6-4ba2-9724-f664986b71d2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-4cfs2" [de8cd3ee-c6d6-4ba2-9724-f664986b71d2] Running
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.009965508s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-204106 exec mysql-596b7fcdbf-4cfs2 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-204106 exec mysql-596b7fcdbf-4cfs2 -- mysql -ppassword -e "show databases;": exit status 1 (109.199516ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-204106 exec mysql-596b7fcdbf-4cfs2 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-204106 exec mysql-596b7fcdbf-4cfs2 -- mysql -ppassword -e "show databases;": exit status 1 (120.995117ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0108 20:43:39.954562   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
functional_test.go:1734: (dbg) Run:  kubectl --context functional-204106 exec mysql-596b7fcdbf-4cfs2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.99s)
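The two ERROR 2002 failures above are expected noise: the mysql container reports Running before mysqld has finished initializing, so the test simply retries the query until the socket accepts connections. A sketch of the same retry loop, assuming the deployment from testdata/mysql.yaml and its root password:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// The pod can be Running while mysqld is still bootstrapping, so poll with a deadline.
	deadline := time.Now().Add(5 * time.Minute)
	for {
		cmd := exec.Command("kubectl", "--context", "functional-204106",
			"exec", "deploy/mysql", "--", "mysql", "-ppassword", "-e", "show databases;")
		out, err := cmd.CombinedOutput()
		if err == nil {
			log.Printf("mysql is ready:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mysql never became ready: %v\n%s", err, out)
		}
		time.Sleep(5 * time.Second) // the ERROR 2002 window is typically only a few seconds
	}
}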

                                                
                                    
TestFunctional/parallel/FileSync (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/10372/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh "sudo cat /etc/test/nested/copy/10372/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

                                                
                                    
TestFunctional/parallel/CertSync (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/10372.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh "sudo cat /etc/ssl/certs/10372.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/10372.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh "sudo cat /usr/share/ca-certificates/10372.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/103722.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh "sudo cat /etc/ssl/certs/103722.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/103722.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh "sudo cat /usr/share/ca-certificates/103722.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.23s)
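The cert sync checks look for the synced certificate both under its original name (10372.pem, derived from the test process PID) and under an OpenSSL subject-hash name such as 51391683.0, which is the convention the system trust store uses for links in /etc/ssl/certs. A sketch of computing that hash locally to predict the expected filename, assuming openssl is installed and a hypothetical test.pem is at hand:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// OpenSSL's subject hash is what /etc/ssl/certs/<hash>.0 entries are named after.
	out, err := exec.Command("openssl", "x509", "-noout", "-hash", "-in", "test.pem").Output()
	if err != nil {
		log.Fatalf("openssl: %v", err)
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("expect the synced cert to appear as /etc/ssl/certs/%s.0 inside the node\n", hash)
}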

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-204106 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
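The node-labels check uses a kubectl go-template to flatten the label map of the first node into a single line. The same query driven from Go, assuming the functional-204106 context; the template string is identical to the one in the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Range over the label map of the first node, printing each key.
	tmpl := `'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'`
	out, err := exec.Command("kubectl", "--context", "functional-204106",
		"get", "nodes", "--output=go-template", "--template="+tmpl).Output()
	if err != nil {
		log.Fatalf("kubectl: %v", err)
	}
	fmt.Printf("node labels: %s\n", out)
}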

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh "sudo systemctl is-active docker"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204106 ssh "sudo systemctl is-active docker": exit status 1 (377.363682ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204106 ssh "sudo systemctl is-active crio": exit status 1 (344.299482ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
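Both non-zero exits above are the expected outcome: with containerd as the active runtime, systemctl is-active docker and systemctl is-active crio print "inactive" and exit with status 3, which ssh surfaces as a failure. A sketch that treats any non-zero exit as "not active", assuming the functional-204106 profile:

package main

import (
	"errors"
	"log"
	"os/exec"
)

// isActive runs "systemctl is-active <unit>" inside the node and returns true
// only when systemd reports the unit as active (exit status 0).
func isActive(unit string) bool {
	cmd := exec.Command("minikube", "-p", "functional-204106", "ssh",
		"sudo systemctl is-active "+unit)
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false // non-zero (typically 3) means inactive or failed
	}
	return err == nil
}

func main() {
	for _, unit := range []string{"docker", "crio"} {
		if isActive(unit) {
			log.Fatalf("%s should be disabled when containerd is the runtime", unit)
		}
	}
	log.Println("only containerd is active, as expected")
}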

                                                
                                    
TestFunctional/parallel/License (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.27s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 image ls --format short

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-204106 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-204106
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-204106
docker.io/kindest/kindnetd:v20221004-44d545d1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-204106 image ls --format table:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd                  | v20221004-44d545d1 | sha256:d6e3e2 | 25.8MB |
| k8s.gcr.io/pause                            | 3.1                | sha256:da86e6 | 315kB  |
| k8s.gcr.io/pause                            | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/coredns/coredns             | v1.9.3             | sha256:5185b9 | 14.8MB |
| registry.k8s.io/kube-proxy                  | v1.25.3            | sha256:beaaf0 | 20.3MB |
| k8s.gcr.io/echoserver                       | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-apiserver              | v1.25.3            | sha256:0346db | 34.2MB |
| registry.k8s.io/kube-controller-manager     | v1.25.3            | sha256:603999 | 31.3MB |
| registry.k8s.io/pause                       | 3.8                | sha256:487387 | 311kB  |
| docker.io/library/minikube-local-cache-test | functional-204106  | sha256:f4b51c | 1.74kB |
| docker.io/library/nginx                     | alpine             | sha256:1e4154 | 16.7MB |
| docker.io/library/nginx                     | latest             | sha256:1403e5 | 56.9MB |
| registry.k8s.io/kube-scheduler              | v1.25.3            | sha256:6d23ec | 15.8MB |
| k8s.gcr.io/pause                            | 3.3                | sha256:0184c1 | 298kB  |
| gcr.io/google-containers/addon-resizer      | functional-204106  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/etcd                        | 3.5.4-0            | sha256:a8a176 | 102MB  |
|---------------------------------------------|--------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-204106 image ls --format json:
[{"id":"sha256:f4b51c80e2d00b968df4ab83780ce62be84c39cec7d19699e40eced0ad0fc8e6","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-204106"],"size":"1738"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:da
86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"315399"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":["registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d"],"repoTags":["registry.k8s.io/pause:3.8"],"size":"311286"},{"id":"sha256:1403e55ab369cd1c8039c34e6b4d47ca40bbde39c371254c7cba14756f472f52","repoDigests":["docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286"],"repoTags":["docker.io/library/nginx:latest"],"size":"56882284"},{"id":"sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3"],"repoTags":["registry.k8s.io
/kube-apiserver:v1.25.3"],"size":"34238163"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":["registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a"],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"14837849"},{"id":"sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":["registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1"],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"102157811"},{"id":"sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91
"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"31261869"},{"id":"sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":["registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f"],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"20265805"},{"id":"sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f","repoDigests":["docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe"],"repoTags":["docker.io/kindest/kindnetd:v20221004-44d545d1"],"size":"25830582"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:1e415454686a67ed83fb7aaa41acb2472e7457061bcdbbf0f5143d7a1a89b36c","repoDigests":["docker.io/library/nginx@sha256:dd8a054d7ef030e94a6449783605d6c30
6c1f69c10c2fa06b66a030e0d1db793"],"repoTags":["docker.io/library/nginx:alpine"],"size":"16678454"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-204106"],"size":"10823156"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},{"id":"sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"15798744"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.44s)
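The JSON listing above is an array of image records (id, repoDigests, repoTags, size), so it is straightforward to post-process. A sketch that decodes it and prints tag/size pairs, assuming the same profile; the struct fields match the keys visible in the output:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image matches the fields seen in "image ls --format json" above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-204106",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls: %v", err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%-60s %s bytes\n", tag, img.Size)
		}
	}
}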

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-204106 image ls --format yaml:
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:1e415454686a67ed83fb7aaa41acb2472e7457061bcdbbf0f5143d7a1a89b36c
repoDigests:
- docker.io/library/nginx@sha256:dd8a054d7ef030e94a6449783605d6c306c1f69c10c2fa06b66a030e0d1db793
repoTags:
- docker.io/library/nginx:alpine
size: "16678454"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-204106
size: "10823156"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "15798744"
- id: sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f
repoDigests:
- docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe
repoTags:
- docker.io/kindest/kindnetd:v20221004-44d545d1
size: "25830582"
- id: sha256:f4b51c80e2d00b968df4ab83780ce62be84c39cec7d19699e40eced0ad0fc8e6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-204106
size: "1738"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests:
- registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "102157811"
- id: sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "20265805"
- id: sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests:
- registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d
repoTags:
- registry.k8s.io/pause:3.8
size: "311286"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:1403e55ab369cd1c8039c34e6b4d47ca40bbde39c371254c7cba14756f472f52
repoDigests:
- docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286
repoTags:
- docker.io/library/nginx:latest
size: "56882284"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"
- id: sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "31261869"
- id: sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "14837849"
- id: sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "34238163"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204106 ssh pgrep buildkitd: exit status 1 (312.989012ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 image build -t localhost/my-image:functional-204106 testdata/build

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-204106 image build -t localhost/my-image:functional-204106 testdata/build: (3.817837365s)
functional_test.go:319: (dbg) Stderr: out/minikube-linux-amd64 -p functional-204106 image build -t localhost/my-image:functional-204106 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.3s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.3s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 1.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:342f65fbc66e18a1b9e11b3993a57fef879c269b75e5497d90da4d44f6d3fd71 done
#8 exporting config sha256:ed4315ef2b35a5b8f7fd59781abf851f248b6bb99fa06c99f5ad0639f08f17a6 0.0s done
#8 naming to localhost/my-image:functional-204106 done
#8 DONE 0.1s
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.40s)
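The BuildKit trace above corresponds to a three-step build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) executed inside the cluster's runtime via minikube image build, with no local Docker daemon involved. A sketch of the same invocation plus a check that the tag shows up in image ls, assuming a local build context directory named testdata/build:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	const tag = "localhost/my-image:functional-204106"

	// Build inside the cluster's container runtime.
	build := exec.Command("minikube", "-p", "functional-204106",
		"image", "build", "-t", tag, "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		log.Fatalf("image build: %v\n%s", err, out)
	}

	// Confirm the freshly built tag is visible to the runtime.
	ls, err := exec.Command("minikube", "-p", "functional-204106", "image", "ls").Output()
	if err != nil {
		log.Fatalf("image ls: %v", err)
	}
	if !strings.Contains(string(ls), tag) {
		log.Fatalf("built image %s not found in image ls output", tag)
	}
	log.Println("image built and registered:", tag)
}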

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.462688279s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-204106
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 image load --daemon gcr.io/google-containers/addon-resizer:functional-204106

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-204106 image load --daemon gcr.io/google-containers/addon-resizer:functional-204106: (3.786030056s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-204106 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-204106 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [54c8e746-4b20-4943-a9fc-b11710305e53] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [54c8e746-4b20-4943-a9fc-b11710305e53] Running

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.009090989s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.38s)
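The tunnel setup above only deploys the nginx pod and service from testdata/testsvc.yaml and waits for the pod to be Running; the later tunnel steps then resolve its external IP. A sketch of the same deploy-and-wait step, assuming the manifest and context are available:

package main

import (
	"log"
	"os/exec"
)

func main() {
	run := func(args ...string) {
		args = append([]string{"--context", "functional-204106"}, args...)
		if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
	}
	// Deploy the nginx service used by the tunnel tests, then block until it is Ready.
	run("apply", "-f", "testdata/testsvc.yaml")
	run("wait", "--for=condition=Ready", "pod", "-l", "run=nginx-svc", "--timeout=4m")
	log.Println("nginx-svc is running; the tunnel can now route traffic to it")
}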

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 image load --daemon gcr.io/google-containers/addon-resizer:functional-204106
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-204106 image load --daemon gcr.io/google-containers/addon-resizer:functional-204106: (3.750710889s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.06s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.411157851s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-204106
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 image load --daemon gcr.io/google-containers/addon-resizer:functional-204106
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-204106 image load --daemon gcr.io/google-containers/addon-resizer:functional-204106: (3.953235571s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.64s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-204106 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.102.216.252 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-204106 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1311: Took "389.917304ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: Took "78.253018ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

TestFunctional/parallel/MountCmd/any-port (9.42s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-204106 /tmp/TestFunctionalparallelMountCmdany-port1864673157/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1673210590703082541" to /tmp/TestFunctionalparallelMountCmdany-port1864673157/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1673210590703082541" to /tmp/TestFunctionalparallelMountCmdany-port1864673157/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1673210590703082541" to /tmp/TestFunctionalparallelMountCmdany-port1864673157/001/test-1673210590703082541
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204106 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (377.703654ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan  8 20:43 created-by-test
-rw-r--r-- 1 docker docker 24 Jan  8 20:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan  8 20:43 test-1673210590703082541
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh cat /mount-9p/test-1673210590703082541
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-204106 replace --force -f testdata/busybox-mount-test.yaml
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [bb54c7ce-fca6-47c4-9161-05ea3537938a] Pending
helpers_test.go:342: "busybox-mount" [bb54c7ce-fca6-47c4-9161-05ea3537938a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [bb54c7ce-fca6-47c4-9161-05ea3537938a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [bb54c7ce-fca6-47c4-9161-05ea3537938a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.005712334s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-204106 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-204106 /tmp/TestFunctionalparallelMountCmdany-port1864673157/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1362: Took "401.676029ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "78.650623ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 image save gcr.io/google-containers/addon-resizer:functional-204106 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p functional-204106 image save gcr.io/google-containers/addon-resizer:functional-204106 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.251148248s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.25s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 image rm gcr.io/google-containers/addon-resizer:functional-204106
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.20s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-204106
functional_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 image save --daemon gcr.io/google-containers/addon-resizer:functional-204106
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-204106
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.96s)

TestFunctional/parallel/MountCmd/specific-port (2.01s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-204106 /tmp/TestFunctionalparallelMountCmdspecific-port3105714424/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204106 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (346.839134ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
2023/01/08 20:43:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-204106 /tmp/TestFunctionalparallelMountCmdspecific-port3105714424/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-204106 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-204106 ssh "sudo umount -f /mount-9p": exit status 1 (390.20957ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-204106 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-204106 /tmp/TestFunctionalparallelMountCmdspecific-port3105714424/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)

TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-204106
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-204106
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-204106
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (80.85s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-204344 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-204344 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m20.85328842s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (80.85s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.66s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-204344 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-204344 addons enable ingress --alsologtostderr -v=5: (9.656596311s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.66s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.43s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-204344 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.43s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (47.21s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:169: (dbg) Run:  kubectl --context ingress-addon-legacy-204344 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:169: (dbg) Done: kubectl --context ingress-addon-legacy-204344 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.120412749s)
addons_test.go:189: (dbg) Run:  kubectl --context ingress-addon-legacy-204344 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:202: (dbg) Run:  kubectl --context ingress-addon-legacy-204344 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:207: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [892c6473-270d-43a0-ab80-4c7831b9a5cd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [892c6473-270d-43a0-ab80-4c7831b9a5cd] Running
addons_test.go:207: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.005550589s
addons_test.go:219: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-204344 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:243: (dbg) Run:  kubectl --context ingress-addon-legacy-204344 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-204344 ip
addons_test.go:254: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-204344 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:263: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-204344 addons disable ingress-dns --alsologtostderr -v=1: (14.614279325s)
addons_test.go:268: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-204344 addons disable ingress --alsologtostderr -v=1
E0108 20:45:56.112475   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
addons_test.go:268: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-204344 addons disable ingress --alsologtostderr -v=1: (7.22922453s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (47.21s)

TestJSONOutput/start/Command (42.87s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-204605 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0108 20:46:23.794710   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-204605 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (42.867645014s)
--- PASS: TestJSONOutput/start/Command (42.87s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-204605 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-204605 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.72s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-204605 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-204605 --output=json --user=testUser: (5.722423816s)
--- PASS: TestJSONOutput/stop/Command (5.72s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.27s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-204700 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-204700 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (89.654967ms)
-- stdout --
	{"specversion":"1.0","id":"930f97ef-bc2a-4a8f-9dd3-3a897315a699","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-204700] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3160f672-14c9-41d3-bffd-5c994eff3f02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"7be85c8a-ada6-4396-b946-bd9adc6b30f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1c8dfe2e-952a-4c77-8e8e-1c7313d54354","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig"}}
	{"specversion":"1.0","id":"80f3206b-2ab1-4d73-813c-55d3ae7dc83f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube"}}
	{"specversion":"1.0","id":"7032dd47-0485-4ef2-9139-29619ad1a7d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e3ba124c-5aef-44a8-a57e-61c35033c1e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-204700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-204700
--- PASS: TestErrorJSONOutput (0.27s)

TestKicCustomNetwork/create_custom_network (43.34s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-204700 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-204700 --network=: (41.173164041s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-204700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-204700
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-204700: (2.13890977s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.34s)

TestKicCustomNetwork/use_default_bridge_network (25.92s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-204743 --network=bridge
E0108 20:47:57.125403   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 20:47:57.130663   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 20:47:57.140904   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 20:47:57.161156   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 20:47:57.201423   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 20:47:57.281768   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 20:47:57.442152   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 20:47:57.762701   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 20:47:58.403356   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 20:47:59.683592   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 20:48:02.244701   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 20:48:07.364864   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-204743 --network=bridge: (23.915297233s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-204743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-204743
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-204743: (1.985169877s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.92s)

TestKicExistingNetwork (28.33s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-204809 --network=existing-network
E0108 20:48:17.605439   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-204809 --network=existing-network: (26.175611964s)
helpers_test.go:175: Cleaning up "existing-network-204809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-204809
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-204809: (1.995714207s)
--- PASS: TestKicExistingNetwork (28.33s)

TestKicCustomSubnet (27.73s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-204838 --subnet=192.168.60.0/24
E0108 20:48:38.085978   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-204838 --subnet=192.168.60.0/24: (25.4540033s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-204838 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-204838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-204838
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-204838: (2.256004432s)
--- PASS: TestKicCustomSubnet (27.73s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (49.85s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-204905 --driver=docker  --container-runtime=containerd
E0108 20:49:19.046187   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-204905 --driver=docker  --container-runtime=containerd: (22.777055336s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-204905 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-204905 --driver=docker  --container-runtime=containerd: (21.70892959s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-204905
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-204905
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-204905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-204905
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-204905: (1.8959448s)
helpers_test.go:175: Cleaning up "first-204905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-204905
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-204905: (2.280024295s)
--- PASS: TestMinikubeProfile (49.85s)

TestMountStart/serial/StartWithMountFirst (5.01s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-204955 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-204955 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.007972987s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.01s)

TestMountStart/serial/VerifyMountFirst (0.32s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-204955 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)

TestMountStart/serial/StartWithMountSecond (5.19s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-204955 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-204955 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.185271555s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.19s)

TestMountStart/serial/VerifyMountSecond (0.32s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-204955 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

TestMountStart/serial/DeleteFirst (1.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-204955 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-204955 --alsologtostderr -v=5: (1.72730043s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

TestMountStart/serial/VerifyMountPostDelete (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-204955 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)

TestMountStart/serial/Stop (1.23s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-204955
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-204955: (1.23087558s)
--- PASS: TestMountStart/serial/Stop (1.23s)

TestMountStart/serial/RestartStopped (6.68s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-204955
E0108 20:50:15.378737   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 20:50:15.384066   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 20:50:15.394343   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 20:50:15.414607   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 20:50:15.454872   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-204955: (5.681271597s)
E0108 20:50:15.535053   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 20:50:15.695445   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 20:50:16.016043   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (6.68s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-204955 ssh -- ls /minikube-host
E0108 20:50:16.656520   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (89.76s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-205018 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0108 20:50:20.498188   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 20:50:25.619070   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 20:50:35.859877   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 20:50:40.966355   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 20:50:56.111864   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
E0108 20:50:56.340690   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 20:51:37.301164   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-205018 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m29.236037125s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (89.76s)
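
Note: the two-node cluster above comes from a single start invocation. A minimal sketch using the flags from this run (the profile name is simply the one this test picked):

    # create a two-node cluster on the docker driver with containerd, then confirm both nodes are Running
    out/minikube-linux-amd64 start -p multinode-205018 --wait=true --memory=2200 --nodes=2 --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 -p multinode-205018 status --alsologtostderr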

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.5s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-205018 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-205018 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-205018 -- rollout status deployment/busybox: (2.776774745s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-205018 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-205018 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-205018 -- exec busybox-65db55d5d6-hcx7s -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-205018 -- exec busybox-65db55d5d6-lqc7k -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-205018 -- exec busybox-65db55d5d6-hcx7s -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-205018 -- exec busybox-65db55d5d6-lqc7k -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-205018 -- exec busybox-65db55d5d6-hcx7s -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-205018 -- exec busybox-65db55d5d6-lqc7k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.50s)
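
Note: the deployment and DNS checks above can be replayed with the bundled kubectl. A minimal sketch; <pod-name> is a placeholder for one of the busybox pod names printed by the get pods call:

    # roll out the two-replica busybox deployment used by the test, then resolve cluster DNS from a pod
    out/minikube-linux-amd64 kubectl -p multinode-205018 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p multinode-205018 -- rollout status deployment/busybox
    out/minikube-linux-amd64 kubectl -p multinode-205018 -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-linux-amd64 kubectl -p multinode-205018 -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local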

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.88s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-205018 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-205018 -- exec busybox-65db55d5d6-hcx7s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-205018 -- exec busybox-65db55d5d6-hcx7s -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-205018 -- exec busybox-65db55d5d6-lqc7k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-205018 -- exec busybox-65db55d5d6-lqc7k -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)
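
Note: host reachability is verified by resolving host.minikube.internal inside a pod and pinging the returned address. A minimal sketch; <pod-name> is a placeholder, and 192.168.58.1 is simply the gateway address this run resolved to:

    # print the host address as seen from inside the pod, then ping it once
    out/minikube-linux-amd64 kubectl -p multinode-205018 -- exec <pod-name> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 kubectl -p multinode-205018 -- exec <pod-name> -- sh -c "ping -c 1 192.168.58.1"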

                                                
                                    
TestMultiNode/serial/AddNode (32.72s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-205018 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-205018 -v 3 --alsologtostderr: (32.036527864s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (32.72s)
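
Note: a worker is added to the existing profile with node add. A minimal sketch of the two commands this step runs:

    # add another node to the running profile and re-check overall status
    out/minikube-linux-amd64 node add -p multinode-205018
    out/minikube-linux-amd64 -p multinode-205018 status --alsologtostderr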

                                                
                                    
TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.28s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 cp testdata/cp-test.txt multinode-205018:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 cp multinode-205018:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1325360011/001/cp-test_multinode-205018.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 cp multinode-205018:/home/docker/cp-test.txt multinode-205018-m02:/home/docker/cp-test_multinode-205018_multinode-205018-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018-m02 "sudo cat /home/docker/cp-test_multinode-205018_multinode-205018-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 cp multinode-205018:/home/docker/cp-test.txt multinode-205018-m03:/home/docker/cp-test_multinode-205018_multinode-205018-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018-m03 "sudo cat /home/docker/cp-test_multinode-205018_multinode-205018-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 cp testdata/cp-test.txt multinode-205018-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 cp multinode-205018-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1325360011/001/cp-test_multinode-205018-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 cp multinode-205018-m02:/home/docker/cp-test.txt multinode-205018:/home/docker/cp-test_multinode-205018-m02_multinode-205018.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018 "sudo cat /home/docker/cp-test_multinode-205018-m02_multinode-205018.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 cp multinode-205018-m02:/home/docker/cp-test.txt multinode-205018-m03:/home/docker/cp-test_multinode-205018-m02_multinode-205018-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018-m03 "sudo cat /home/docker/cp-test_multinode-205018-m02_multinode-205018-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 cp testdata/cp-test.txt multinode-205018-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 cp multinode-205018-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1325360011/001/cp-test_multinode-205018-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 cp multinode-205018-m03:/home/docker/cp-test.txt multinode-205018:/home/docker/cp-test_multinode-205018-m03_multinode-205018.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018 "sudo cat /home/docker/cp-test_multinode-205018-m03_multinode-205018.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 cp multinode-205018-m03:/home/docker/cp-test.txt multinode-205018-m02:/home/docker/cp-test_multinode-205018-m03_multinode-205018-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018-m02 "sudo cat /home/docker/cp-test_multinode-205018-m03_multinode-205018-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.28s)
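
Note: the copy matrix above exercises host-to-node, node-to-host and node-to-node transfers with minikube cp, each verified over ssh. A condensed sketch; /tmp/cp-test.txt is an arbitrary local destination chosen for illustration:

    # host -> node, then verify on the node
    out/minikube-linux-amd64 -p multinode-205018 cp testdata/cp-test.txt multinode-205018:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018 "sudo cat /home/docker/cp-test.txt"
    # node -> host
    out/minikube-linux-amd64 -p multinode-205018 cp multinode-205018:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node, then verify on the second node
    out/minikube-linux-amd64 -p multinode-205018 cp multinode-205018:/home/docker/cp-test.txt multinode-205018-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-205018 ssh -n multinode-205018-m02 "sudo cat /home/docker/cp-test.txt"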

                                                
                                    
TestMultiNode/serial/StopNode (2.32s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-205018 node stop m03: (1.243324631s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-205018 status: exit status 7 (544.789426ms)

                                                
                                                
-- stdout --
	multinode-205018
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-205018-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-205018-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-205018 status --alsologtostderr: exit status 7 (531.610576ms)

                                                
                                                
-- stdout --
	multinode-205018
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-205018-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-205018-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 20:52:39.996812  102177 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:52:39.996941  102177 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:52:39.996952  102177 out.go:309] Setting ErrFile to fd 2...
	I0108 20:52:39.996962  102177 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:52:39.997077  102177 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 20:52:39.997247  102177 out.go:303] Setting JSON to false
	I0108 20:52:39.997279  102177 mustload.go:65] Loading cluster: multinode-205018
	I0108 20:52:39.997354  102177 notify.go:220] Checking for updates...
	I0108 20:52:39.997631  102177 config.go:180] Loaded profile config "multinode-205018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 20:52:39.997647  102177 status.go:255] checking status of multinode-205018 ...
	I0108 20:52:39.998038  102177 cli_runner.go:164] Run: docker container inspect multinode-205018 --format={{.State.Status}}
	I0108 20:52:40.022184  102177 status.go:330] multinode-205018 host status = "Running" (err=<nil>)
	I0108 20:52:40.022207  102177 host.go:66] Checking if "multinode-205018" exists ...
	I0108 20:52:40.022409  102177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-205018
	I0108 20:52:40.045191  102177 host.go:66] Checking if "multinode-205018" exists ...
	I0108 20:52:40.045441  102177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:52:40.045486  102177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-205018
	I0108 20:52:40.068655  102177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32842 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/multinode-205018/id_rsa Username:docker}
	I0108 20:52:40.147796  102177 ssh_runner.go:195] Run: systemctl --version
	I0108 20:52:40.151363  102177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:52:40.160055  102177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:52:40.252047  102177 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-08 20:52:40.179361729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:52:40.252568  102177 kubeconfig.go:92] found "multinode-205018" server: "https://192.168.58.2:8443"
	I0108 20:52:40.252588  102177 api_server.go:165] Checking apiserver status ...
	I0108 20:52:40.252614  102177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:52:40.261246  102177 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1208/cgroup
	I0108 20:52:40.268174  102177 api_server.go:181] apiserver freezer: "12:freezer:/docker/d5f4ae8c3011e4c9e28fa917255862f745a9be22f04ddef17a6e8dee9556e0f7/kubepods/burstable/pod09e1c4be63f51e05881a657f3a5de1e6/d5684fa1c4c2ab30df2d621101b5a9deb15c05dfcca8078b8db8e28dd5fe53db"
	I0108 20:52:40.268233  102177 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d5f4ae8c3011e4c9e28fa917255862f745a9be22f04ddef17a6e8dee9556e0f7/kubepods/burstable/pod09e1c4be63f51e05881a657f3a5de1e6/d5684fa1c4c2ab30df2d621101b5a9deb15c05dfcca8078b8db8e28dd5fe53db/freezer.state
	I0108 20:52:40.274493  102177 api_server.go:203] freezer state: "THAWED"
	I0108 20:52:40.274527  102177 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0108 20:52:40.278887  102177 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0108 20:52:40.278906  102177 status.go:421] multinode-205018 apiserver status = Running (err=<nil>)
	I0108 20:52:40.278915  102177 status.go:257] multinode-205018 status: &{Name:multinode-205018 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 20:52:40.278934  102177 status.go:255] checking status of multinode-205018-m02 ...
	I0108 20:52:40.279161  102177 cli_runner.go:164] Run: docker container inspect multinode-205018-m02 --format={{.State.Status}}
	I0108 20:52:40.302255  102177 status.go:330] multinode-205018-m02 host status = "Running" (err=<nil>)
	I0108 20:52:40.302275  102177 host.go:66] Checking if "multinode-205018-m02" exists ...
	I0108 20:52:40.302534  102177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-205018-m02
	I0108 20:52:40.324712  102177 host.go:66] Checking if "multinode-205018-m02" exists ...
	I0108 20:52:40.324932  102177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:52:40.324968  102177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-205018-m02
	I0108 20:52:40.347104  102177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/multinode-205018-m02/id_rsa Username:docker}
	I0108 20:52:40.427735  102177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:52:40.436216  102177 status.go:257] multinode-205018-m02 status: &{Name:multinode-205018-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0108 20:52:40.436245  102177 status.go:255] checking status of multinode-205018-m03 ...
	I0108 20:52:40.436480  102177 cli_runner.go:164] Run: docker container inspect multinode-205018-m03 --format={{.State.Status}}
	I0108 20:52:40.460701  102177 status.go:330] multinode-205018-m03 host status = "Stopped" (err=<nil>)
	I0108 20:52:40.460728  102177 status.go:343] host is not running, skipping remaining checks
	I0108 20:52:40.460736  102177 status.go:257] multinode-205018-m03 status: &{Name:multinode-205018-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.32s)
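
Note: stopping a single node leaves the rest of the cluster running; status then reports the stopped host and exits non-zero (exit status 7 in this run). A minimal sketch; the trailing echo is only there to surface the exit code:

    # stop the third node, then observe the degraded status
    out/minikube-linux-amd64 -p multinode-205018 node stop m03
    out/minikube-linux-amd64 -p multinode-205018 status || echo "status exited with $?"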

                                                
                                    
TestMultiNode/serial/StartAfterStop (30.84s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 node start m03 --alsologtostderr
E0108 20:52:57.125348   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 20:52:59.221504   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-205018 node start m03 --alsologtostderr: (30.065048303s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.84s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (154.39s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-205018
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-205018
E0108 20:53:24.807208   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-205018: (40.951157185s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-205018 --wait=true -v=8 --alsologtostderr
E0108 20:55:15.379538   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 20:55:43.062145   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-205018 --wait=true -v=8 --alsologtostderr: (1m53.29426421s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-205018
--- PASS: TestMultiNode/serial/RestartKeepsNodes (154.39s)
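
Note: the point of this step is that a full stop/start cycle preserves the node list. A minimal sketch with the commands used above:

    # record the node list, restart the whole profile, and compare
    out/minikube-linux-amd64 node list -p multinode-205018
    out/minikube-linux-amd64 stop -p multinode-205018
    out/minikube-linux-amd64 start -p multinode-205018 --wait=true
    out/minikube-linux-amd64 node list -p multinode-205018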

                                                
                                    
TestMultiNode/serial/DeleteNode (4.91s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-205018 node delete m03: (4.242225203s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.91s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (40.11s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 stop
E0108 20:55:56.112338   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-205018 stop: (39.875209093s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-205018 status: exit status 7 (113.387757ms)

                                                
                                                
-- stdout --
	multinode-205018
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-205018-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-205018 status --alsologtostderr: exit status 7 (118.671903ms)

                                                
                                                
-- stdout --
	multinode-205018
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-205018-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 20:56:30.651952  112762 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:56:30.652131  112762 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:56:30.652142  112762 out.go:309] Setting ErrFile to fd 2...
	I0108 20:56:30.652149  112762 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:56:30.652315  112762 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 20:56:30.652551  112762 out.go:303] Setting JSON to false
	I0108 20:56:30.652581  112762 mustload.go:65] Loading cluster: multinode-205018
	I0108 20:56:30.652665  112762 notify.go:220] Checking for updates...
	I0108 20:56:30.652996  112762 config.go:180] Loaded profile config "multinode-205018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 20:56:30.653011  112762 status.go:255] checking status of multinode-205018 ...
	I0108 20:56:30.653387  112762 cli_runner.go:164] Run: docker container inspect multinode-205018 --format={{.State.Status}}
	I0108 20:56:30.681873  112762 status.go:330] multinode-205018 host status = "Stopped" (err=<nil>)
	I0108 20:56:30.681893  112762 status.go:343] host is not running, skipping remaining checks
	I0108 20:56:30.681899  112762 status.go:257] multinode-205018 status: &{Name:multinode-205018 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 20:56:30.681920  112762 status.go:255] checking status of multinode-205018-m02 ...
	I0108 20:56:30.682221  112762 cli_runner.go:164] Run: docker container inspect multinode-205018-m02 --format={{.State.Status}}
	I0108 20:56:30.703391  112762 status.go:330] multinode-205018-m02 host status = "Stopped" (err=<nil>)
	I0108 20:56:30.703411  112762 status.go:343] host is not running, skipping remaining checks
	I0108 20:56:30.703416  112762 status.go:257] multinode-205018-m02 status: &{Name:multinode-205018-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.11s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (78.04s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-205018 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0108 20:57:19.155150   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-205018 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m17.371293888s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-205018 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (78.04s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (27.51s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-205018
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-205018-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-205018-m02 --driver=docker  --container-runtime=containerd: exit status 14 (97.183363ms)

                                                
                                                
-- stdout --
	* [multinode-205018-m02] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-205018-m02' is duplicated with machine name 'multinode-205018-m02' in profile 'multinode-205018'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-205018-m03 --driver=docker  --container-runtime=containerd
E0108 20:57:57.125111   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-205018-m03 --driver=docker  --container-runtime=containerd: (25.101350559s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-205018
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-205018: exit status 80 (324.9235ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-205018
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-205018-m03 already exists in multinode-205018-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-205018-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-205018-m03: (1.923327298s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.51s)

                                                
                                    
TestScheduledStopUnix (99.22s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-210424 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-210424 --memory=2048 --driver=docker  --container-runtime=containerd: (22.615129715s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-210424 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-210424 -n scheduled-stop-210424
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-210424 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-210424 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-210424 -n scheduled-stop-210424
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-210424
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-210424 --schedule 15s
E0108 21:05:15.378806   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0108 21:05:56.111592   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-210424
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-210424: exit status 7 (89.364609ms)

                                                
                                                
-- stdout --
	scheduled-stop-210424
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-210424 -n scheduled-stop-210424
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-210424 -n scheduled-stop-210424: exit status 7 (86.868697ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-210424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-210424
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-210424: (4.927747349s)
--- PASS: TestScheduledStopUnix (99.22s)
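
Note: scheduled stop is driven entirely by the stop subcommand's --schedule and --cancel-scheduled flags, and a pending timer is visible via the TimeToStop status field. A minimal sketch using this run's profile name; the sleep is just an illustrative wait for the short timer to fire:

    # schedule a stop, inspect the countdown, cancel it, then schedule a short stop and let it fire
    out/minikube-linux-amd64 stop -p scheduled-stop-210424 --schedule 5m
    out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-210424
    out/minikube-linux-amd64 stop -p scheduled-stop-210424 --cancel-scheduled
    out/minikube-linux-amd64 stop -p scheduled-stop-210424 --schedule 15s
    sleep 20 && out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-210424   # expect "Stopped"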

                                                
                                    
TestInsufficientStorage (15.12s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-210603 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-210603 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.593172105s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c2277170-b6bf-4011-991f-20522a645eca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-210603] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2516d7d2-aa43-4b90-a2a2-a7d7bde30d7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"aafb7a18-0ab4-4909-ae19-ec20e8a1fb45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ce475d0b-1247-4fd9-8b60-156223ea323f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig"}}
	{"specversion":"1.0","id":"5fcf94e7-02a6-4b28-bc0e-c62aee5f5822","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube"}}
	{"specversion":"1.0","id":"4a6bb9ac-83ee-4523-b154-b6496bea63c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"54f3eaae-3f61-4faf-a869-130dc33186da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3eba38f6-2e4a-46f1-9a48-d4acf440117c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"263397ad-d6eb-42db-aac3-252e1d4cd05d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"56f6790f-0c56-44e7-b3c6-70709578dec8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"738c3220-6835-43c3-995c-ce93c6c74054","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-210603 in cluster insufficient-storage-210603","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a5e9c805-3f02-473c-ab57-6e08bf540dac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"59ec5fb5-ae39-427a-bdd1-b0dc1648698c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2fe6b479-75cd-4054-b543-9e52c8939124","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-210603 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-210603 --output=json --layout=cluster: exit status 7 (324.543722ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-210603","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-210603","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:06:12.595799  135877 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-210603" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-210603 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-210603 --output=json --layout=cluster: exit status 7 (322.844032ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-210603","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-210603","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:06:12.919291  135986 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-210603" does not appear in /home/jenkins/minikube-integration/15565-3617/kubeconfig
	E0108 21:06:12.927350  135986 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/insufficient-storage-210603/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-210603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-210603
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-210603: (5.880592346s)
--- PASS: TestInsufficientStorage (15.12s)
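
Note: this test appears to simulate a nearly full /var via the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables shown in the JSON events above; start then aborts with RSRC_DOCKER_STORAGE (exit 26) and the structured status reports StatusCode 507 (InsufficientStorage). A minimal sketch, assuming those variables are honored the same way as in this run:

    # pretend /var has 100 units of capacity with only 19 free, then inspect the structured status
    export MINIKUBE_TEST_STORAGE_CAPACITY=100
    export MINIKUBE_TEST_AVAILABLE_STORAGE=19
    out/minikube-linux-amd64 start -p insufficient-storage-210603 --memory=2048 --output=json --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 status -p insufficient-storage-210603 --output=json --layout=cluster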

                                                
                                    
TestRunningBinaryUpgrade (103.84s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.1172310977.exe start -p running-upgrade-210759 --memory=2200 --vm-driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.1172310977.exe start -p running-upgrade-210759 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m6.182427724s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-210759 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-210759 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (33.017238464s)
helpers_test.go:175: Cleaning up "running-upgrade-210759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-210759
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-210759: (2.514773442s)
--- PASS: TestRunningBinaryUpgrade (103.84s)

                                                
                                    
TestMissingContainerUpgrade (140.56s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.3500275885.exe start -p missing-upgrade-210733 --memory=2200 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.3500275885.exe start -p missing-upgrade-210733 --memory=2200 --driver=docker  --container-runtime=containerd: (1m18.143942243s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-210733

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-210733: (12.921459113s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-210733
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-210733 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-210733 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (44.698317803s)
helpers_test.go:175: Cleaning up "missing-upgrade-210733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-210733
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-210733: (2.220367014s)
--- PASS: TestMissingContainerUpgrade (140.56s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-210618 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-210618 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (102.618542ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-210618] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
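
Note: --no-kubernetes and --kubernetes-version are mutually exclusive, which is exactly what the exit status 14 above reports. A minimal sketch of the working form:

    # if a version is pinned in the global config, clear it first (as the error message suggests)
    out/minikube-linux-amd64 config unset kubernetes-version
    # start the profile without provisioning Kubernetes at all
    out/minikube-linux-amd64 start -p NoKubernetes-210618 --no-kubernetes --driver=docker --container-runtime=containerd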

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.05s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.05s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (33.32s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-210618 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-210618 --driver=docker  --container-runtime=containerd: (32.814489377s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-210618 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.32s)

                                                
                                    
TestNetworkPlugins/group/false (0.5s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:220: (dbg) Run:  out/minikube-linux-amd64 start -p false-210619 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:220: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-210619 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (254.956442ms)

                                                
                                                
-- stdout --
	* [false-210619] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:06:19.318602  136819 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:06:19.318839  136819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:06:19.318849  136819 out.go:309] Setting ErrFile to fd 2...
	I0108 21:06:19.318856  136819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:06:19.319041  136819 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
	I0108 21:06:19.319649  136819 out.go:303] Setting JSON to false
	I0108 21:06:19.320712  136819 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2929,"bootTime":1673209051,"procs":418,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:06:19.320778  136819 start.go:135] virtualization: kvm guest
	I0108 21:06:19.323643  136819 out.go:177] * [false-210619] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:06:19.327059  136819 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 21:06:19.327237  136819 notify.go:220] Checking for updates...
	I0108 21:06:19.330945  136819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:06:19.332686  136819 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
	I0108 21:06:19.334294  136819 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
	I0108 21:06:19.336558  136819 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:06:19.341115  136819 config.go:180] Loaded profile config "NoKubernetes-210618": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:06:19.341279  136819 config.go:180] Loaded profile config "offline-containerd-210618": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0108 21:06:19.341329  136819 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 21:06:19.370543  136819 docker.go:137] docker version: linux-20.10.22
	I0108 21:06:19.370649  136819 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 21:06:19.479939  136819 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-01-08 21:06:19.393509943 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 21:06:19.480072  136819 docker.go:254] overlay module found
	I0108 21:06:19.482764  136819 out.go:177] * Using the docker driver based on user configuration
	I0108 21:06:19.484090  136819 start.go:294] selected driver: docker
	I0108 21:06:19.484106  136819 start.go:838] validating driver "docker" against <nil>
	I0108 21:06:19.484130  136819 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:06:19.486780  136819 out.go:177] 
	W0108 21:06:19.488372  136819 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0108 21:06:19.490034  136819 out.go:177] 

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "false-210619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-210619
--- PASS: TestNetworkPlugins/group/false (0.50s)
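
The MK_USAGE exit above is immediately followed by profile cleanup and a PASS, so the rejection itself appears to be what this "false" CNI case checks for: with --container-runtime=containerd, minikube refuses to start unless some CNI is configured. As a hedged illustration only (the profile name below is made up), the network-plugin groups later in this report sidestep the error by selecting a CNI explicitly, either via --cni (kindnet, cilium, bridge) or via --enable-default-cni=true:
	# example profile name is hypothetical; --cni=bridge is one of the variants exercised below
	out/minikube-linux-amd64 start -p example-cni-profile --driver=docker --container-runtime=containerd --cni=bridge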

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (157.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.3352144283.exe start -p stopped-upgrade-210618 --memory=2200 --vm-driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.3352144283.exe start -p stopped-upgrade-210618 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m19.726595407s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.3352144283.exe -p stopped-upgrade-210618 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.3352144283.exe -p stopped-upgrade-210618 stop: (1.249564813s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-210618 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-210618 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m16.277219993s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (157.25s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (21.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-210618 --no-kubernetes --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-210618 --no-kubernetes --driver=docker  --container-runtime=containerd: (17.905165101s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-210618 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-210618 status -o json: exit status 2 (405.863444ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-210618","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-210618
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-210618: (2.808378815s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (21.12s)
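
The JSON status above shows the mixed state the test expects: the host container is Running while Kubelet and APIServer are Stopped, which is also why the command exits with status 2. A minimal sketch, not part of the test suite, of reading individual fields with the Go-template form of status that other tests in this report use:
	# prints e.g. "Running/Stopped"; status still exits non-zero while kubelet is stopped
	out/minikube-linux-amd64 status --format='{{.Host}}/{{.Kubelet}}' -p NoKubernetes-210618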

                                                
                                    
x
+
TestNoKubernetes/serial/Start (5.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-210618 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-210618 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.355650686s)
--- PASS: TestNoKubernetes/serial/Start (5.36s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-210618 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-210618 "sudo systemctl is-active --quiet service kubelet": exit status 1 (391.555147ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)
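
The non-zero exit here is the asserted result: the kubelet unit is not active in the --no-kubernetes profile. A small sketch under the same assumptions (same profile, systemd running in the node container); dropping --quiet makes systemctl print the unit state instead of only signalling it through the exit code:
	# expected to print "inactive" (or similar) and exit non-zero for this profile
	out/minikube-linux-amd64 ssh -p NoKubernetes-210618 -- sudo systemctl is-active kubelet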

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (2.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.491739756s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-210618

                                                
                                                
=== CONT  TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-210618: (2.084836767s)
--- PASS: TestNoKubernetes/serial/Stop (2.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-210618 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-210618 --driver=docker  --container-runtime=containerd: (6.621505736s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.62s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-210618 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-210618 "sudo systemctl is-active --quiet service kubelet": exit status 1 (405.452172ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.41s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-210618
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)

                                                
                                    
x
+
TestPause/serial/Start (47.98s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-210943 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-210943 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (47.975795127s)
--- PASS: TestPause/serial/Start (47.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (55.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-210618 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd
E0108 21:10:15.378355   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-210618 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (55.819322628s)
--- PASS: TestNetworkPlugins/group/auto/Start (55.82s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (15.83s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-210943 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-210943 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (15.820576845s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (15.83s)

                                                
                                    
x
+
TestPause/serial/Pause (0.77s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-210943 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.77s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.36s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-210943 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-210943 --output=json --layout=cluster: exit status 2 (361.152458ms)

                                                
                                                
-- stdout --
	{"Name":"pause-210943","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-210943","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
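
In the cluster-layout JSON above, the 418/"Paused" codes for the cluster and the apiserver together with kubelet at 405/"Stopped" are what a paused profile is expected to report, which is why the exit status 2 is tolerated. As a hedged illustration (jq is not used by the suite itself), the per-component states can be pulled out of that same output:
	# exit status 2 from minikube is expected while paused; the JSON still lands on stdout
	out/minikube-linux-amd64 status -p pause-210943 --output=json --layout=cluster | jq '.Nodes[].Components'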

                                                
                                    
x
+
TestPause/serial/Unpause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-210943 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.8s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-210943 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-210618 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-210618 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-rpgzh" [68408f57-7661-4b73-84e3-59089b9af47f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-rpgzh" [68408f57-7661-4b73-84e3-59089b9af47f] Running
E0108 21:10:56.111787   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.00721525s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.28s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.95s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-210943 --alsologtostderr -v=5

                                                
                                                
=== CONT  TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-210943 --alsologtostderr -v=5: (2.950538837s)
--- PASS: TestPause/serial/DeletePaused (2.95s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (14.17s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json

                                                
                                                
=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.081511781s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-210943
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-210943: exit status 1 (26.460835ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-210943

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-210618 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-210618 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-210618 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (57.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-210619 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-210619 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (57.220060735s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (57.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/Start (91.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-210619 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-210619 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m31.522317074s)
--- PASS: TestNetworkPlugins/group/cilium/Start (91.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-4xkcz" [eb2fd23e-d62e-4be3-9d5f-fa11cfcbe771] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.013881645s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-210619 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-210619 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-kq8rs" [4c968659-e1d6-44c0-8a6c-7b7c70080018] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-kq8rs" [4c968659-e1d6-44c0-8a6c-7b7c70080018] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.006937633s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-210619 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-210619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-210619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (36.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-210619 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-210619 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (36.691796228s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (36.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-mrlgr" [c6002123-d27b-423a-96a5-45d0994872bc] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.014520581s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-210619 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/NetCatPod (9.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-210619 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-v5r29" [4258af83-1a3a-47f2-a98b-de8493dd0649] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-v5r29" [4258af83-1a3a-47f2-a98b-de8493dd0649] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 9.010427161s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (9.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-210619 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-210619 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-g8867" [3b976c17-cc5a-4d0b-a851-ca84e514bc07] Pending

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-g8867" [3b976c17-cc5a-4d0b-a851-ca84e514bc07] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 21:12:57.125513   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-g8867" [3b976c17-cc5a-4d0b-a851-ca84e514bc07] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.005529074s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-210619 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-210619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-210619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (38.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-210619 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-210619 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (38.369607473s)
--- PASS: TestNetworkPlugins/group/bridge/Start (38.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-210619 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-210619 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-m6jq4" [a75d30f7-cc07-47ab-8bf9-11c7b9290b53] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-m6jq4" [a75d30f7-cc07-47ab-8bf9-11c7b9290b53] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005801867s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (45.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-211950 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-211950 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (45.597584606s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.60s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-211950 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [60f2ac65-7787-489e-966f-6e90b6c75733] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [60f2ac65-7787-489e-966f-6e90b6c75733] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.011770565s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-211950 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.7s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-211950 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-211950 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.70s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (20.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-211950 --alsologtostderr -v=3
E0108 21:20:50.301081   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
E0108 21:20:56.112404   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
E0108 21:21:00.168672   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-211950 --alsologtostderr -v=3: (20.089894912s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-211950 -n embed-certs-211950
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-211950 -n embed-certs-211950: exit status 7 (101.030814ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-211950 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (315.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-211950 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
E0108 21:21:17.983937   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/auto-210618/client.crt: no such file or directory
E0108 21:21:59.211316   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:22:26.894894   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
E0108 21:22:39.301355   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
E0108 21:22:54.041128   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:22:54.046381   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:22:54.056622   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:22:54.076870   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:22:54.117175   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:22:54.197663   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:22:54.358039   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:22:54.678491   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:22:55.319389   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:22:56.600356   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:22:57.125911   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 21:22:59.160702   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:23:04.281254   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-211950 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (5m14.656706552s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-211950 -n embed-certs-211950
E0108 21:26:20.533905   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/bridge-210619/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (315.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-4fjct" [8282daa7-69bd-4a39-8a11-d693502a7ce0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-4fjct" [8282daa7-69bd-4a39-8a11-d693502a7ce0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.013352192s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-4fjct" [8282daa7-69bd-4a39-8a11-d693502a7ce0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00580359s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-211950 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-211950 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)
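
The two "Found non-minikube image" lines come from scanning the image list returned by the crictl command above. A minimal sketch, assuming jq is available on the host (it is not part of the test suite), for listing the cached image tags that the check walks over:
	# repoTags is the field the JSON output of `crictl images -o json` keys the tags under
	out/minikube-linux-amd64 ssh -p embed-certs-211950 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'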

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-211950 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-211950 -n embed-certs-211950
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-211950 -n embed-certs-211950: exit status 2 (371.607674ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-211950 -n embed-certs-211950
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-211950 -n embed-certs-211950: exit status 2 (361.509211ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-211950 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-211950 -n embed-certs-211950
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-211950 -n embed-certs-211950
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.98s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (46.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-212639 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
E0108 21:26:59.210628   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/kindnet-210619/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-212639 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (46.848894719s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.85s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-212639 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.60s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-212639 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-212639 --alsologtostderr -v=3: (1.26502193s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-212639 -n newest-cni-212639
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-212639 -n newest-cni-212639: exit status 7 (96.065433ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-212639 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (29.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-212639 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
E0108 21:27:39.301749   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/cilium-210619/client.crt: no such file or directory
E0108 21:27:54.041438   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/enable-default-cni-210619/client.crt: no such file or directory
E0108 21:27:57.124965   10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-212639 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (29.389451108s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-212639 -n newest-cni-212639
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.77s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-212639 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-212639 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-212639 -n newest-cni-212639
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-212639 -n newest-cni-212639: exit status 2 (364.988796ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-212639 -n newest-cni-212639
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-212639 -n newest-cni-212639: exit status 2 (372.94715ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-212639 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-212639 -n newest-cni-212639
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-212639 -n newest-cni-212639
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.81s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-211828 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-211828 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.59s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-211828 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-211828 --alsologtostderr -v=3: (1.285064036s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-211828 -n old-k8s-version-211828
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-211828 -n old-k8s-version-211828: exit status 7 (96.421328ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-211828 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-211859 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-211859 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.60s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-211859 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-211859 --alsologtostderr -v=3: (1.272608188s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-211859 -n no-preload-211859
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-211859 -n no-preload-211859: exit status 7 (91.627275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-211859 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-211952 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-211952 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.58s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-211952 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-211952 --alsologtostderr -v=3: (1.267528806s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-211952 -n default-k8s-diff-port-211952
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-211952 -n default-k8s-diff-port-211952: exit status 7 (98.619617ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-211952 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    

Test skip (23/268)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.25.3/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:455: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:456: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet
net_test.go:91: Skipping the test as the containerd container runtime requires CNI
helpers_test.go:175: Cleaning up "kubenet-210618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-210618
--- SKIP: TestNetworkPlugins/group/kubenet (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-210619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-210619
--- SKIP: TestNetworkPlugins/group/flannel (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-210619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-flannel-210619
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.22s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-211952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-211952
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)

                                                
                                    